Jan 26 00:09:14 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 00:09:15 crc kubenswrapper[5107]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:15 crc kubenswrapper[5107]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 26 00:09:15 crc kubenswrapper[5107]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:15 crc kubenswrapper[5107]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:15 crc kubenswrapper[5107]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 26 00:09:15 crc kubenswrapper[5107]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.717047 5107 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720599 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720626 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720633 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720638 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720644 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720650 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720655 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720661 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720668 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720674 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720680 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720685 5107 feature_gate.go:328] unrecognized feature gate: Example2
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720690 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720696 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720702 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720709 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720714 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720721 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720727 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720734 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720741 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720748 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720754 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720761 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720767 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720773 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720780 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720787 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720793 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720799 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720805 5107 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720811 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720816 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720822 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720827 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720835 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720841 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720847 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720852 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720858 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720864 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720871 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720876 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720881 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720906 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720911 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.720918 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721104 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721109 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721114 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721119 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721125 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721132 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721139 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721148 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721155 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721162 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721168 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721175 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721181 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721187 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721193 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721199 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721205 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721212 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721218 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721224 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721230 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721236 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721242 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721248 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721253 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721261 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721268 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721275 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721281 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721287 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721293 5107 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721299 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721306 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721313 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721319 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721324 5107 feature_gate.go:328] unrecognized feature gate: Example
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721328 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721333 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.721338 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.726697 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727272 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727282 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727297 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727303 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727309 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727314 5107 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727319 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727324 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727329 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727333 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727338 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727342 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727346 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727350 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727358 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727367 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727371 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727376 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727381 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727386 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727393 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727397 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727402 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727408 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727412 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727417 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727421 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727426 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727436 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727443 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727448 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727455 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727464 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727471 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727477 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727485 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727495 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727507 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727519 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727529 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727535 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727541 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727546 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727552 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727557 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727563 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727568 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727574 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727579 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727584 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727589 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727595 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727599 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727602 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727606 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727619 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727623 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727627 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727630 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727634 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727637 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727641 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727644 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727649 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727655 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727659 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727663 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727667 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727670 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727674 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727677 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727680 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727688 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727691 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727695 5107 feature_gate.go:328] unrecognized feature gate: Example2
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727698 5107 feature_gate.go:328] unrecognized feature gate: Example
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727707 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727711 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727715 5107 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727718 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727722 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727726 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727730 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727734 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.727737 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728322 5107 flags.go:64] FLAG: --address="0.0.0.0"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728351 5107 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728372 5107 flags.go:64] FLAG: --anonymous-auth="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728380 5107 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728394 5107 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728400 5107 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728408 5107 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728417 5107 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728422 5107 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728427 5107 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728433 5107 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728438 5107 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728446 5107 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728453 5107 flags.go:64] FLAG: --cgroup-root=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728458 5107 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728463 5107 flags.go:64] FLAG: --client-ca-file=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728467 5107 flags.go:64] FLAG: --cloud-config=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728471 5107 flags.go:64] FLAG: --cloud-provider=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728475 5107 flags.go:64] FLAG: --cluster-dns="[]"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728480 5107 flags.go:64] FLAG: --cluster-domain=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728484 5107 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728490 5107 flags.go:64] FLAG: --config-dir=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728495 5107 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728500 5107 flags.go:64] FLAG: --container-log-max-files="5"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728507 5107 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728513 5107 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728518 5107 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728524 5107 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728531 5107 flags.go:64] FLAG: --contention-profiling="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728540 5107 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728546 5107 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728552 5107 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728557 5107 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728568 5107 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728581 5107 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728589 5107 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728595 5107 flags.go:64] FLAG: --enable-load-reader="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728600 5107 flags.go:64] FLAG: --enable-server="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728609 5107 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728620 5107 flags.go:64] FLAG: --event-burst="100"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728625 5107 flags.go:64] FLAG: --event-qps="50"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728630 5107 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728635 5107 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728639 5107 flags.go:64] FLAG: --eviction-hard=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728647 5107 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728652 5107 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728661 5107 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728667 5107 flags.go:64] FLAG: --eviction-soft=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728672 5107 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728677 5107 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728682 5107 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728686 5107 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728691 5107 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728696 5107 flags.go:64] FLAG: --fail-swap-on="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728701 5107 flags.go:64] FLAG: --feature-gates=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728709 5107 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728714 5107 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728719 5107 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728724 5107 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728730 5107 flags.go:64] FLAG: --healthz-port="10248"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728741 5107 flags.go:64] FLAG: --help="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728750 5107 flags.go:64] FLAG: --hostname-override=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728755 5107 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.728764 5107 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729282 5107 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729292 5107 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729298 5107 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729303 5107 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729307 5107 flags.go:64] FLAG: --image-service-endpoint=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729311 5107 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729316 5107 flags.go:64] FLAG: --kube-api-burst="100"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729321 5107 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729329 5107 flags.go:64] FLAG: --kube-api-qps="50"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729334 5107 flags.go:64] FLAG: --kube-reserved=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729338 5107 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729344 5107 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729349 5107 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729353 5107 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729358 5107 flags.go:64] FLAG: --lock-file=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729364 5107 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729370 5107 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729410 5107 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729421 5107 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729425 5107 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729429 5107 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729434 5107 flags.go:64] FLAG: --logging-format="text"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729438 5107 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729443 5107 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729447 5107 flags.go:64] FLAG: --manifest-url=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729455 5107 flags.go:64] FLAG: --manifest-url-header=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729463 5107 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729468 5107 flags.go:64] FLAG: --max-open-files="1000000"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729475 5107 flags.go:64] FLAG: --max-pods="110"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729479 5107 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729484 5107 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729489 5107 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729493 5107 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729500 5107 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729505 5107 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729510 5107 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729530 5107 flags.go:64] FLAG: --node-status-max-images="50"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729536 5107 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729542 5107 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729551 5107 flags.go:64] FLAG: --pod-cidr=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729557 5107 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729568 5107 flags.go:64] FLAG: --pod-manifest-path=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729573 5107 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729579 5107 flags.go:64] FLAG: --pods-per-core="0"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729583 5107 flags.go:64] FLAG: --port="10250"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729588 5107 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729593 5107 flags.go:64] FLAG: --provider-id=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729598 5107 flags.go:64] FLAG: --qos-reserved=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729607 5107 flags.go:64] FLAG: --read-only-port="10255"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729611 5107 flags.go:64] FLAG: --register-node="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729616 5107 flags.go:64] FLAG: --register-schedulable="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729621 5107 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729633 5107 flags.go:64] FLAG: --registry-burst="10"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729637 5107 flags.go:64] FLAG: --registry-qps="5"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729641 5107 flags.go:64] FLAG: --reserved-cpus=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729647 5107 flags.go:64] FLAG: --reserved-memory=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729653 5107 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729658 5107 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729663 5107 flags.go:64] FLAG: --rotate-certificates="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729667 5107 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729672 5107 flags.go:64] FLAG: --runonce="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729713 5107 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729718 5107 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729723 5107 flags.go:64] FLAG: --seccomp-default="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729730 5107 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729735 5107 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729740 5107 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729746 5107 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729751 5107 flags.go:64] FLAG: --storage-driver-password="root"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729757 5107 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729762 5107 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729766 5107 flags.go:64] FLAG: --storage-driver-user="root"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729770 5107 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729777 5107 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729782 5107 flags.go:64] FLAG: --system-cgroups=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729786 5107 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729795 5107 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729800 5107 flags.go:64] FLAG: --tls-cert-file=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729805 5107 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729814 5107 flags.go:64] FLAG: --tls-min-version=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729818 5107 flags.go:64] FLAG: --tls-private-key-file=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729823 5107 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729827 5107 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729832 5107 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729836 5107 flags.go:64] FLAG: --v="2"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729844 5107 flags.go:64] FLAG: --version="false"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729850 5107 flags.go:64] FLAG: --vmodule=""
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729859 5107 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.729864 5107 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730132 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730142 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730147 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730152 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730156 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730159 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730163 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730169 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730173 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730177 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730181 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730185 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730188 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730192 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730198 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730204 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730209 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730213 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730218 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730223 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730229 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730235 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730239 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730248 5107 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730252 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730257 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730260 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730264 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730267 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730271 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730274 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730278 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730284 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730288 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730291 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730295 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730298 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730302 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730305 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730309 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730313 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730316 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730319 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730323 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730326 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730332 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730336 5107 feature_gate.go:328] unrecognized feature gate: Example
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730340 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730344 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730347 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730350 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730354 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730357 5107 feature_gate.go:328] unrecognized feature gate: Example2
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730364 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730368 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730373 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730377 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730381 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730387 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730390 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730394 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730397 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730402 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730405 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730409 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730413 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730416 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730420 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730423 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730428 5107 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730434 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730437 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730441 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730445 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730448 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730452 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730455 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730459 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730462 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730467 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730471 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730474 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730478 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730483 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730488 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.730498 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.730734 5107 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.741339 5107 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.741445 5107 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741513 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741524 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741531 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741536 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741541 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741546 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741550 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741554 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741558 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741563 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741567 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741572 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741581 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741586 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741591 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741596 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741601 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741607 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741611 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741615 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741619 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741623 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741627 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741631 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741636 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741640 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741644 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741649 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741652 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741658 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741665 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741669 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741673 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741678 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741682 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741689 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741694 5107 feature_gate.go:328] unrecognized feature gate: Example Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741700 5107 
feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741705 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741710 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741715 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741719 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741724 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741730 5107 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741734 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741738 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741743 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741748 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741752 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741756 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741761 5107 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741766 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741770 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741774 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741777 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741781 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741784 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741788 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741791 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741794 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741797 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741800 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741804 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 
26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741808 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741811 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741817 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741821 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741825 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741829 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741832 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741835 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741838 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741842 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741845 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741848 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741851 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741854 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741858 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741861 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741865 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741868 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741872 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741875 5107 feature_gate.go:328] unrecognized feature gate: Example2 Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741878 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741899 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.741905 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.741918 5107 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true 
RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742091 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742102 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742108 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742113 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742118 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742124 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742130 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742134 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742139 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742143 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742148 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742154 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742159 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742164 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742169 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742173 5107 feature_gate.go:328] unrecognized feature gate: Example2 Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742177 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742182 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742186 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742190 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742194 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742198 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742203 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 26 00:09:15 crc 
kubenswrapper[5107]: W0126 00:09:15.742207 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742211 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742217 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742222 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742227 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742232 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742236 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742240 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742247 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742252 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742256 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742260 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742265 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742271 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742277 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742282 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742287 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742293 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742297 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742302 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742307 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742312 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742316 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742320 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742324 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742327 5107 feature_gate.go:328] unrecognized feature gate: 
MetricsCollectionProfiles Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742333 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742338 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742342 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742346 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742350 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742354 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742358 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742362 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742366 5107 feature_gate.go:328] unrecognized feature gate: Example Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742373 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742377 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742381 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742385 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742390 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742394 5107 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742399 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742403 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742407 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742412 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742416 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742422 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742426 5107 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742431 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742435 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742439 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 26 00:09:15 
crc kubenswrapper[5107]: W0126 00:09:15.742444 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742448 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742453 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742458 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742462 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742466 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742471 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742475 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742479 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742484 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742488 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 26 00:09:15 crc kubenswrapper[5107]: W0126 00:09:15.742491 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.742498 5107 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.742852 5107 server.go:962] "Client rotation is on, will bootstrap in background" Jan 26 00:09:15 crc kubenswrapper[5107]: E0126 00:09:15.745331 5107 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.748817 5107 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.749017 5107 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.749647 5107 server.go:1019] "Starting client certificate rotation" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.749854 5107 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.749982 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:15 crc kubenswrapper[5107]: E0126 00:09:15.784741 5107 
certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.880364 5107 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.882495 5107 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.901784 5107 log.go:25] "Validated CRI v1 runtime API" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.924635 5107 log.go:25] "Validated CRI v1 image API" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.926312 5107 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.930713 5107 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-26-00-02-55-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.930755 5107 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.982913 5107 manager.go:217] Machine: {Timestamp:2026-01-26 00:09:15.98144584 +0000 UTC m=+0.899040186 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:d9c41fe3-854d-4f0f-b42d-bfcf817b111c BootID:066ffcb3-e507-457f-8c26-3fe6d538369f Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] 
DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:06:2a:55 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:06:2a:55 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7a:30:9b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:3f:a6:2b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fa:1e:67 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:18:a0:3b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4e:a1:c7:af:fb:87 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:76:d1:20:87:ee:57 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction 
Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.983151 5107 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.983450 5107 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.984344 5107 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.984388 5107 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.984599 5107 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.984610 5107 container_manager_linux.go:306] "Creating device plugin manager" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.984641 5107 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.984664 5107 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.985087 5107 state_mem.go:36] "Initialized new in-memory state store" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.985329 5107 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 
00:09:15.986468 5107 kubelet.go:491] "Attempting to sync node with API server" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.986491 5107 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.986512 5107 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.986528 5107 kubelet.go:397] "Adding apiserver pod source" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.986550 5107 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 26 00:09:15 crc kubenswrapper[5107]: E0126 00:09:15.988574 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:15 crc kubenswrapper[5107]: E0126 00:09:15.988963 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.989611 5107 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.989638 5107 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.990582 5107 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.990598 5107 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.992148 5107 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.992423 5107 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.992880 5107 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993322 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993347 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993354 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993361 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993372 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993384 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 
26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993391 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993398 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993407 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993422 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993435 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993575 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993801 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.993816 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Jan 26 00:09:15 crc kubenswrapper[5107]: I0126 00:09:15.994538 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.005631 5107 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.005705 5107 server.go:1295] "Started kubelet" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.005896 5107 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.005998 5107 server_v1.go:47] "podresources" method="list" useActivePods=true Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.006664 5107 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.006491 5107 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 00:09:16 crc systemd[1]: Started Kubernetes Kubelet. 
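Up to this point the CSR creation, the node lease, the event POST and the informer LISTs all fail with "connection refused" against https://api-int.crc.testing:6443, which simply means nothing is answering on that endpoint yet while the kubelet itself has started. A small stand-alone probe that mirrors that reachability check is sketched below; it is an illustrative helper (host and port taken from the log lines above), not anything the kubelet itself runs:

```python
#!/usr/bin/env python3
"""Probe the API endpoint the kubelet is retrying against."""
import socket
import sys

HOST, PORT = "api-int.crc.testing", 6443  # endpoint seen in the errors above

def probe(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as err:  # e.g. ECONNREFUSED while kube-apiserver is down
        print(f"{host}:{port} not reachable: {err}", file=sys.stderr)
        return False

if __name__ == "__main__":
    sys.exit(0 if probe(HOST, PORT) else 1)
```

Once this probe succeeds, the certificate rotation and reflector retries recorded above would be expected to stop failing on their own.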
Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.026244 5107 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.026587 5107 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.026762 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.028992 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.026808 5107 volume_manager.go:295] "The desired_state_of_world populator starts" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.029067 5107 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.026821 5107 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.032073 5107 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.032107 5107 factory.go:55] Registering systemd factory Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.032119 5107 factory.go:223] Registration of the systemd container factory successfully Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.026594 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.203:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e1f55933617fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.00566067 +0000 UTC m=+0.923255016,LastTimestamp:2026-01-26 00:09:16.00566067 +0000 UTC m=+0.923255016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.026805 5107 server.go:317] "Adding debug handlers to kubelet server" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.032962 5107 factory.go:153] Registering CRI-O factory Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.033009 5107 factory.go:223] Registration of the crio container factory successfully Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.033062 5107 factory.go:103] Registering Raw factory Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.033094 5107 manager.go:1196] Started watching for new ooms in manager Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.057271 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="200ms" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.057582 5107 manager.go:319] Starting recovery of all containers Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076820 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076892 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076905 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076915 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076924 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076935 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076947 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076957 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076967 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076978 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076987 5107 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.076997 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077005 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077013 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077026 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077054 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077063 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077073 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077083 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077093 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077102 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077114 5107 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077123 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077132 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077173 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077184 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077209 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077274 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077289 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077300 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077312 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077324 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077334 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077345 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077354 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077399 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077408 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077418 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077436 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077447 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077458 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077468 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077493 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077502 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077512 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077521 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077530 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077540 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077548 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077557 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077568 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077578 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077587 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077597 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077607 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077616 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077631 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077639 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077651 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077663 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077672 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077681 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077691 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077703 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077713 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.077723 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078413 5107 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078439 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078456 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078469 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078479 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078489 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078498 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078507 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078519 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078530 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078541 5107 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078551 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078562 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078579 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078592 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078605 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078618 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078628 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078639 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078649 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078659 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078670 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078679 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078689 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078700 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078714 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078726 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078738 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078770 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078786 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078801 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078815 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078826 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" 
volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078838 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078851 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078864 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078877 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078905 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078916 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078928 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078939 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078952 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078965 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078977 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.078988 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079001 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079013 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079231 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079247 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079260 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079272 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079284 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079296 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079432 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079447 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079459 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079471 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079485 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079498 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079509 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079519 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079532 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079542 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079552 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079566 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079576 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079587 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079597 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079608 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079620 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079630 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079639 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079649 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079660 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079670 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079680 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079691 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079703 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079713 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079723 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079733 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079746 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079756 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079769 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079780 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079790 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079801 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079813 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" 
volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079823 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079834 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079844 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079855 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079865 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079879 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079901 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079912 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079922 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079933 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079942 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" 
volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079953 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079965 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079974 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079983 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.079992 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080002 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080012 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080022 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080031 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080040 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080051 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080061 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080071 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080080 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080091 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080101 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080111 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080120 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080131 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080140 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080151 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080161 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080172 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080183 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080193 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080202 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080213 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080223 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080244 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080255 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080266 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080277 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080287 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" 
volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080297 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080306 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080316 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080324 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080333 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080343 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080353 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080362 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080373 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080384 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080393 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" 
volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080402 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080410 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080419 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080429 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080438 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080447 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080456 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080466 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080476 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080486 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080512 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080524 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080534 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080543 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080553 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080569 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080617 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080627 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080636 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080646 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080655 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080666 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" 
volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080676 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080685 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080695 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080706 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080717 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080727 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080737 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080747 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080759 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080770 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080780 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" 
volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080790 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080801 5107 reconstruct.go:97] "Volume reconstruction finished" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.080808 5107 reconciler.go:26] "Reconciler: start to sync state" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.100954 5107 manager.go:324] Recovery completed Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.109556 5107 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.111586 5107 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.111637 5107 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.111677 5107 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.111695 5107 kubelet.go:2451] "Starting kubelet main sync loop" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.111748 5107 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.113826 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.126996 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.129377 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.131118 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.131624 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.131640 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.132632 5107 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.132658 5107 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.132692 5107 state_mem.go:36] "Initialized new in-memory state store" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.138524 5107 policy_none.go:49] "None policy: Start" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.138581 5107 memory_manager.go:186] 
"Starting memorymanager" policy="None" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.138614 5107 state_mem.go:35] "Initializing new in-memory state store" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.212531 5107 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.227878 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.259338 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="400ms" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.328377 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.413330 5107 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.593360 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.610591 5107 manager.go:341] "Starting Device Plugin manager" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.610809 5107 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.610834 5107 server.go:85] "Starting device plugin registration server" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.611236 5107 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.611253 5107 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.611461 5107 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.611659 5107 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.611670 5107 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.615942 5107 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.616015 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.661404 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="800ms" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.711650 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.713477 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.713568 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.713589 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.713641 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.714556 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.203:6443: connect: connection refused" node="crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.814275 5107 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.814764 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.815971 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.816030 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.816053 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.817110 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.817457 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.817532 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.818020 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.818092 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.818108 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.818556 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.818599 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.818610 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.819157 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.819228 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.819257 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.819851 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.819910 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.819927 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.820381 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.820407 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.820420 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.821210 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.821408 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.821460 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.821777 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.821810 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.821824 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.822466 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.822500 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.822513 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.822617 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.822678 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.822716 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.823373 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.823381 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.823472 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.823498 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.823402 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.823583 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.825494 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.825588 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.826556 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.826598 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.826612 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.872498 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.881243 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.902597 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.906593 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.915291 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.916619 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.916672 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.916686 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.916719 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.917264 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.203:6443: connect: connection refused" node="crc" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.924171 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.924186 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.924575 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.924637 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.924676 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925216 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925280 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925355 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925350 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925401 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925534 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925479 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925562 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925654 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925704 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925758 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925870 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925919 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925943 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925967 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.925993 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.926012 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.926032 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.926028 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.926049 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.926053 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.926066 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.926090 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.926126 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.926491 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 
00:09:16.926838 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.933735 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:16 crc kubenswrapper[5107]: E0126 00:09:16.994946 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:16 crc kubenswrapper[5107]: I0126 00:09:16.995626 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028048 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028135 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028165 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028242 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028280 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028340 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028307 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028315 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028400 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028425 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028449 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028476 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028502 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028530 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028533 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028600 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028561 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028561 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028610 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028641 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028681 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028698 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028720 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028461 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028793 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028820 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 
00:09:17.028849 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028872 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028913 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028939 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.028943 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.029043 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: E0126 00:09:17.135233 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.174115 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.183016 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.207966 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: W0126 00:09:17.209318 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-e12e6baf04da885eb58b52808273c06a8979a8c929dbdb6f4c24882946b53db4 WatchSource:0}: Error finding container e12e6baf04da885eb58b52808273c06a8979a8c929dbdb6f4c24882946b53db4: Status 404 returned error can't find the container with id e12e6baf04da885eb58b52808273c06a8979a8c929dbdb6f4c24882946b53db4 Jan 26 00:09:17 crc kubenswrapper[5107]: W0126 00:09:17.213111 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-87c76c1109e4ba015c1101f992621cf12892fa6a8a6f86fc9d34617ee8a44b61 WatchSource:0}: Error finding container 87c76c1109e4ba015c1101f992621cf12892fa6a8a6f86fc9d34617ee8a44b61: Status 404 returned error can't find the container with id 87c76c1109e4ba015c1101f992621cf12892fa6a8a6f86fc9d34617ee8a44b61 Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.219872 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.224666 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: W0126 00:09:17.225237 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-1df4c8f13880dadc38725ac349ba0fc403931a0c4de7732778fd185ef6ff95cb WatchSource:0}: Error finding container 1df4c8f13880dadc38725ac349ba0fc403931a0c4de7732778fd185ef6ff95cb: Status 404 returned error can't find the container with id 1df4c8f13880dadc38725ac349ba0fc403931a0c4de7732778fd185ef6ff95cb Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.234265 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 00:09:17 crc kubenswrapper[5107]: W0126 00:09:17.244928 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-c94ab9d51e119d43250bac5be4652a3c1b28d722b5918bbf9fa18b6747622b88 WatchSource:0}: Error finding container c94ab9d51e119d43250bac5be4652a3c1b28d722b5918bbf9fa18b6747622b88: Status 404 returned error can't find the container with id c94ab9d51e119d43250bac5be4652a3c1b28d722b5918bbf9fa18b6747622b88 Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.318111 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.385441 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.385506 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.385521 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.385548 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:17 crc kubenswrapper[5107]: E0126 00:09:17.386034 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.203:6443: connect: connection refused" node="crc" Jan 26 00:09:17 crc kubenswrapper[5107]: E0126 00:09:17.462908 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="1.6s" Jan 26 00:09:17 crc kubenswrapper[5107]: E0126 00:09:17.619229 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.825012 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:17 crc kubenswrapper[5107]: E0126 00:09:17.826445 5107 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 26 00:09:17 crc kubenswrapper[5107]: I0126 00:09:17.998866 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.120407 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"7e3955ed5ce9bd2258111e46fc582429ac80ca03bb58ad450d2ddd0a08ff769e"} Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.122477 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8"} Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.122544 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"c94ab9d51e119d43250bac5be4652a3c1b28d722b5918bbf9fa18b6747622b88"} Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.123935 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71"} Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.123961 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"1df4c8f13880dadc38725ac349ba0fc403931a0c4de7732778fd185ef6ff95cb"} Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.124146 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.125082 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.125117 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.125128 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:18 crc kubenswrapper[5107]: E0126 00:09:18.125837 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.127120 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b"} Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.127172 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"87c76c1109e4ba015c1101f992621cf12892fa6a8a6f86fc9d34617ee8a44b61"} Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.128766 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b"} Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.128788 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e12e6baf04da885eb58b52808273c06a8979a8c929dbdb6f4c24882946b53db4"} Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.234388 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.235613 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.235658 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.235670 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:18 crc kubenswrapper[5107]: I0126 00:09:18.235699 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:18 crc kubenswrapper[5107]: E0126 00:09:18.236872 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.203:6443: connect: connection refused" node="crc" Jan 26 00:09:18 crc kubenswrapper[5107]: E0126 00:09:18.947940 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.003088 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Jan 26 00:09:19 crc kubenswrapper[5107]: E0126 00:09:19.064375 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="3.2s" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.182808 5107 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b" exitCode=0 Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.182924 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b"} Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.183056 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.184093 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.184133 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.184149 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:19 crc kubenswrapper[5107]: E0126 
00:09:19.184405 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.186510 5107 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8" exitCode=0 Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.186591 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.186626 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8"} Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.187108 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.187139 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.187154 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:19 crc kubenswrapper[5107]: E0126 00:09:19.187350 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.188609 5107 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71" exitCode=0 Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.188680 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71"} Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.188810 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.189416 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.189466 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.189485 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:19 crc kubenswrapper[5107]: E0126 00:09:19.189772 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.191143 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf"} Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.193535 5107 generic.go:358] "Generic (PLEG): container finished" 
podID="3a14caf222afb62aaabdc47808b6f944" containerID="d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b" exitCode=0 Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.193584 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b"} Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.193657 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.195315 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.195360 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.195370 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:19 crc kubenswrapper[5107]: E0126 00:09:19.195570 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.203052 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.204294 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.204331 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.204345 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:19 crc kubenswrapper[5107]: E0126 00:09:19.234524 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:19 crc kubenswrapper[5107]: E0126 00:09:19.332610 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:19 crc kubenswrapper[5107]: E0126 00:09:19.548123 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.837044 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.838724 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.838764 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.838775 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.838806 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:19 crc kubenswrapper[5107]: E0126 00:09:19.839293 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.203:6443: connect: connection refused" node="crc" Jan 26 00:09:19 crc kubenswrapper[5107]: I0126 00:09:19.995623 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.210840 5107 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41" exitCode=0 Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.210964 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41"} Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.211071 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.212286 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.212321 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.212333 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:20 crc kubenswrapper[5107]: E0126 00:09:20.212541 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.213902 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026"} Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.214065 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.214789 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.214820 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.214831 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:20 crc 
kubenswrapper[5107]: E0126 00:09:20.215020 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:20 crc kubenswrapper[5107]: E0126 00:09:20.217572 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.203:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e1f55933617fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.00566067 +0000 UTC m=+0.923255016,LastTimestamp:2026-01-26 00:09:16.00566067 +0000 UTC m=+0.923255016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.219441 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3"} Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.223507 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455"} Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.223569 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85"} Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.223745 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.224976 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.225006 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.225016 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:20 crc kubenswrapper[5107]: E0126 00:09:20.225207 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.228264 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b"} Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.485823 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:20 crc kubenswrapper[5107]: I0126 00:09:20.995318 5107 csi_plugin.go:988] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.232963 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd"} Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.233159 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9"} Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.233385 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.234567 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.234690 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.234782 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:21 crc kubenswrapper[5107]: E0126 00:09:21.235097 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.237711 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c"} Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.237837 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1"} Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.240143 5107 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e" exitCode=0 Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.240403 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e"} Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.240410 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.240767 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.241136 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.241245 5107 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.241457 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.241431 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.241669 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.241703 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:21 crc kubenswrapper[5107]: E0126 00:09:21.241929 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:21 crc kubenswrapper[5107]: E0126 00:09:21.242206 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.242547 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.243453 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.243565 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.244264 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:21 crc kubenswrapper[5107]: E0126 00:09:21.244851 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.494726 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:21 crc kubenswrapper[5107]: I0126 00:09:21.995961 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.203:6443: connect: connection refused Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.018204 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:22 crc kubenswrapper[5107]: E0126 00:09:22.019828 5107 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.203:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.247434 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a"} Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.251256 5107 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b1aca2d73febffe83b4f184385b1823dcd40098f6bdf6a1b1c46b26f7017dda0"} Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.251284 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7"} Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.251431 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.251439 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.251512 5107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.251543 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.252499 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.252523 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.252533 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:22 crc kubenswrapper[5107]: E0126 00:09:22.252828 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.252960 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.252983 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.252994 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:22 crc kubenswrapper[5107]: E0126 00:09:22.253324 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.253541 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.253574 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.253602 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:22 crc kubenswrapper[5107]: E0126 00:09:22.253827 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:22 crc kubenswrapper[5107]: E0126 00:09:22.270448 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="6.4s" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.365297 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:22 crc kubenswrapper[5107]: I0126 00:09:22.376326 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.039612 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.089010 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.089067 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.089078 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.089108 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.216988 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.278709 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db"} Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.278822 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.278822 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251"} Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.278943 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f"} Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.278740 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.279594 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.279618 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.279628 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.279719 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:23 crc 
kubenswrapper[5107]: I0126 00:09:23.279756 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:23 crc kubenswrapper[5107]: I0126 00:09:23.279768 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:23 crc kubenswrapper[5107]: E0126 00:09:23.279939 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:23 crc kubenswrapper[5107]: E0126 00:09:23.280382 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.178674 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.286123 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7"} Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.286309 5107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.286330 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.286370 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.286317 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.287055 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.287082 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.287132 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.287142 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.287089 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.287187 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.287195 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.287158 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.287200 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:24 crc kubenswrapper[5107]: E0126 00:09:24.287820 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:24 crc kubenswrapper[5107]: E0126 00:09:24.288080 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:24 crc kubenswrapper[5107]: E0126 00:09:24.288169 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.495118 5107 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.495235 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Jan 26 00:09:24 crc kubenswrapper[5107]: I0126 00:09:24.609283 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.129386 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.129590 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.130476 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.130517 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.130530 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:25 crc kubenswrapper[5107]: E0126 00:09:25.130846 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.288363 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.288449 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.288993 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.289018 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.289028 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.289153 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.289199 5107 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:25 crc kubenswrapper[5107]: I0126 00:09:25.289212 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:25 crc kubenswrapper[5107]: E0126 00:09:25.289355 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:25 crc kubenswrapper[5107]: E0126 00:09:25.289663 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.290230 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.290852 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.290920 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.290931 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:26 crc kubenswrapper[5107]: E0126 00:09:26.291397 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:26 crc kubenswrapper[5107]: E0126 00:09:26.616233 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.678046 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.678820 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.695943 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.696016 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.696034 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:26 crc kubenswrapper[5107]: E0126 00:09:26.696522 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.871556 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.871831 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.872848 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:26 crc kubenswrapper[5107]: I0126 00:09:26.872932 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:26 crc 
kubenswrapper[5107]: I0126 00:09:26.872947 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:26 crc kubenswrapper[5107]: E0126 00:09:26.873371 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:27 crc kubenswrapper[5107]: I0126 00:09:27.157308 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Jan 26 00:09:27 crc kubenswrapper[5107]: I0126 00:09:27.292286 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:27 crc kubenswrapper[5107]: I0126 00:09:27.293295 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:27 crc kubenswrapper[5107]: I0126 00:09:27.293338 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:27 crc kubenswrapper[5107]: I0126 00:09:27.293351 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:27 crc kubenswrapper[5107]: E0126 00:09:27.293776 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:30 crc kubenswrapper[5107]: I0126 00:09:30.182593 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:32 crc kubenswrapper[5107]: I0126 00:09:32.712831 5107 trace.go:236] Trace[1287404014]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:22.711) (total time: 10001ms): Jan 26 00:09:32 crc kubenswrapper[5107]: Trace[1287404014]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:32.712) Jan 26 00:09:32 crc kubenswrapper[5107]: Trace[1287404014]: [10.001187465s] [10.001187465s] END Jan 26 00:09:32 crc kubenswrapper[5107]: E0126 00:09:32.712918 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:32 crc kubenswrapper[5107]: I0126 00:09:32.996643 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 00:09:33 crc kubenswrapper[5107]: E0126 00:09:33.091205 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 26 00:09:33 crc kubenswrapper[5107]: I0126 00:09:33.808173 5107 trace.go:236] Trace[1878565149]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:23.805) (total time: 10002ms): Jan 26 00:09:33 crc kubenswrapper[5107]: Trace[1878565149]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:09:33.808) Jan 26 00:09:33 crc 
kubenswrapper[5107]: Trace[1878565149]: [10.002489962s] [10.002489962s] END Jan 26 00:09:33 crc kubenswrapper[5107]: E0126 00:09:33.808231 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.027313 5107 trace.go:236] Trace[1233875740]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:24.026) (total time: 10001ms): Jan 26 00:09:34 crc kubenswrapper[5107]: Trace[1233875740]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:34.027) Jan 26 00:09:34 crc kubenswrapper[5107]: Trace[1233875740]: [10.001121295s] [10.001121295s] END Jan 26 00:09:34 crc kubenswrapper[5107]: E0126 00:09:34.027355 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.179136 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.179298 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.457606 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.457682 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.495550 5107 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.495676 5107 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.728051 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.728568 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.730851 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.730963 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.730984 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:34 crc kubenswrapper[5107]: E0126 00:09:34.731831 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:34 crc kubenswrapper[5107]: I0126 00:09:34.748382 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 00:09:35 crc kubenswrapper[5107]: I0126 00:09:35.328102 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:35 crc kubenswrapper[5107]: I0126 00:09:35.329316 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:35 crc kubenswrapper[5107]: I0126 00:09:35.329376 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:35 crc kubenswrapper[5107]: I0126 00:09:35.329386 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:35 crc kubenswrapper[5107]: E0126 00:09:35.329993 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:36 crc kubenswrapper[5107]: E0126 00:09:36.617133 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:36 crc kubenswrapper[5107]: I0126 00:09:36.878823 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:36 crc kubenswrapper[5107]: I0126 00:09:36.879194 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:36 crc kubenswrapper[5107]: I0126 00:09:36.880213 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:36 crc kubenswrapper[5107]: I0126 00:09:36.880288 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:36 crc kubenswrapper[5107]: I0126 00:09:36.880312 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:36 crc 
kubenswrapper[5107]: E0126 00:09:36.881286 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:38 crc kubenswrapper[5107]: E0126 00:09:38.672312 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.186593 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.187077 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.188488 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.188591 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.188653 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.189577 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.194944 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.337275 5107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.337343 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.338031 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.338066 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.338076 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.338489 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.456045 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f55933617fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.00566067 +0000 UTC m=+0.923255016,LastTimestamp:2026-01-26 00:09:16.00566067 +0000 UTC m=+0.923255016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.456976 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.457040 5107 trace.go:236] Trace[676579969]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:25.240) (total time: 14216ms): Jan 26 00:09:39 crc kubenswrapper[5107]: Trace[676579969]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14216ms (00:09:39.456) Jan 26 00:09:39 crc kubenswrapper[5107]: Trace[676579969]: [14.216149304s] [14.216149304s] END Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.457242 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.461405 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab7ca50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131600976 +0000 UTC m=+1.049195322,LastTimestamp:2026-01-26 00:09:16.131600976 +0000 UTC m=+1.049195322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.469092 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab847e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131633127 +0000 UTC m=+1.049227473,LastTimestamp:2026-01-26 00:09:16.131633127 +0000 UTC m=+1.049227473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.474828 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab88050 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131647568 +0000 UTC m=+1.049241914,LastTimestamp:2026-01-26 00:09:16.131647568 +0000 UTC m=+1.049241914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.479939 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f55b785f96e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.614875502 +0000 UTC m=+1.532469848,LastTimestamp:2026-01-26 00:09:16.614875502 +0000 UTC m=+1.532469848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.485096 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab7ca50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab7ca50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131600976 +0000 UTC m=+1.049195322,LastTimestamp:2026-01-26 00:09:16.713537813 +0000 UTC m=+1.631132169,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.513015 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.555219 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.555267 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.555282 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.555312 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.562997 5107 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.564551 5107 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.188e1f559ab847e7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab847e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131633127 +0000 UTC m=+1.049227473,LastTimestamp:2026-01-26 00:09:16.713580644 +0000 UTC m=+1.631175000,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.569147 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.569859 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab88050\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab88050 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131647568 +0000 UTC m=+1.049241914,LastTimestamp:2026-01-26 00:09:16.713596445 +0000 UTC m=+1.631190791,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.572477 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab7ca50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab7ca50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131600976 +0000 UTC m=+1.049195322,LastTimestamp:2026-01-26 00:09:16.816003961 +0000 UTC m=+1.733598297,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.577555 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab847e7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab847e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131633127 +0000 UTC m=+1.049227473,LastTimestamp:2026-01-26 00:09:16.816043122 +0000 UTC m=+1.733637468,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.581490 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab88050\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab88050 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131647568 +0000 UTC m=+1.049241914,LastTimestamp:2026-01-26 00:09:16.816066733 +0000 UTC m=+1.733661079,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.585079 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab7ca50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab7ca50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131600976 +0000 UTC m=+1.049195322,LastTimestamp:2026-01-26 00:09:16.818064424 +0000 UTC m=+1.735658770,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.586774 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab847e7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab847e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131633127 +0000 UTC m=+1.049227473,LastTimestamp:2026-01-26 00:09:16.818102245 +0000 UTC m=+1.735696591,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.591960 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45994->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.592040 
5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45994->192.168.126.11:17697: read: connection reset by peer" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.592341 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.592420 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.592878 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab88050\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab88050 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131647568 +0000 UTC m=+1.049241914,LastTimestamp:2026-01-26 00:09:16.818115905 +0000 UTC m=+1.735710251,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.597375 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab7ca50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab7ca50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131600976 +0000 UTC m=+1.049195322,LastTimestamp:2026-01-26 00:09:16.818579087 +0000 UTC m=+1.736173433,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.603222 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab847e7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab847e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131633127 +0000 UTC m=+1.049227473,LastTimestamp:2026-01-26 00:09:16.818605938 +0000 UTC m=+1.736200284,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.608243 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab88050\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab88050 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131647568 +0000 UTC m=+1.049241914,LastTimestamp:2026-01-26 00:09:16.818616108 +0000 UTC m=+1.736210454,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.615256 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab7ca50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab7ca50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131600976 +0000 UTC m=+1.049195322,LastTimestamp:2026-01-26 00:09:16.81987419 +0000 UTC m=+1.737468536,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.621414 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab847e7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab847e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131633127 +0000 UTC m=+1.049227473,LastTimestamp:2026-01-26 00:09:16.819919731 +0000 UTC m=+1.737514077,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.626941 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab88050\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab88050 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131647568 +0000 UTC m=+1.049241914,LastTimestamp:2026-01-26 00:09:16.819935921 +0000 UTC m=+1.737530267,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.631229 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab7ca50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab7ca50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131600976 +0000 UTC m=+1.049195322,LastTimestamp:2026-01-26 00:09:16.820397113 +0000 UTC m=+1.737991459,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.639087 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab847e7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab847e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131633127 +0000 UTC m=+1.049227473,LastTimestamp:2026-01-26 00:09:16.820413724 +0000 UTC m=+1.738008070,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.645956 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab88050\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab88050 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131647568 +0000 UTC m=+1.049241914,LastTimestamp:2026-01-26 00:09:16.820425834 +0000 UTC m=+1.738020180,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.651733 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab7ca50\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab7ca50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131600976 +0000 UTC m=+1.049195322,LastTimestamp:2026-01-26 00:09:16.821795209 +0000 UTC m=+1.739389555,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.656834 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f559ab847e7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f559ab847e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:16.131633127 +0000 UTC m=+1.049227473,LastTimestamp:2026-01-26 00:09:16.821817839 +0000 UTC m=+1.739412185,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.663133 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f55db9dc416 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:17.220414486 +0000 UTC m=+2.138008832,LastTimestamp:2026-01-26 00:09:17.220414486 +0000 UTC m=+2.138008832,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.670695 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f55db9f8a8d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" 
already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:17.220530829 +0000 UTC m=+2.138125185,LastTimestamp:2026-01-26 00:09:17.220530829 +0000 UTC m=+2.138125185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.676556 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f55dc7f3460 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:17.235188832 +0000 UTC m=+2.152783178,LastTimestamp:2026-01-26 00:09:17.235188832 +0000 UTC m=+2.152783178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.682702 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f55dd3c975e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:17.247600478 +0000 UTC m=+2.165194824,LastTimestamp:2026-01-26 00:09:17.247600478 +0000 UTC m=+2.165194824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.688224 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f55e590b08b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 
00:09:17.387329675 +0000 UTC m=+2.304924021,LastTimestamp:2026-01-26 00:09:17.387329675 +0000 UTC m=+2.304924021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.692785 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f560f4bafc7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.087450567 +0000 UTC m=+3.005044903,LastTimestamp:2026-01-26 00:09:18.087450567 +0000 UTC m=+3.005044903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.698000 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f560f4d1283 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.087541379 +0000 UTC m=+3.005135725,LastTimestamp:2026-01-26 00:09:18.087541379 +0000 UTC m=+3.005135725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.702225 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f560f4f5e89 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.087691913 +0000 UTC m=+3.005286259,LastTimestamp:2026-01-26 00:09:18.087691913 +0000 UTC m=+3.005286259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.706985 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f560f4f7af9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.087699193 +0000 UTC m=+3.005293559,LastTimestamp:2026-01-26 00:09:18.087699193 +0000 UTC m=+3.005293559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.711863 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f5610477bef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.103952367 +0000 UTC m=+3.021546733,LastTimestamp:2026-01-26 00:09:18.103952367 +0000 UTC m=+3.021546733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.718155 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f5619ae2dd2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.261677522 +0000 UTC m=+3.179271868,LastTimestamp:2026-01-26 00:09:18.261677522 +0000 UTC m=+3.179271868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.724017 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f561a16b8e2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.268528866 +0000 UTC m=+3.186123212,LastTimestamp:2026-01-26 00:09:18.268528866 +0000 UTC m=+3.186123212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.729113 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f561a1df7e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.269003749 +0000 UTC m=+3.186598095,LastTimestamp:2026-01-26 00:09:18.269003749 +0000 UTC m=+3.186598095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.735338 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f561a1ffcc8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.269136072 +0000 UTC m=+3.186730418,LastTimestamp:2026-01-26 00:09:18.269136072 +0000 UTC m=+3.186730418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.741652 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f561a288284 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.269694596 
+0000 UTC m=+3.187288942,LastTimestamp:2026-01-26 00:09:18.269694596 +0000 UTC m=+3.187288942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.749426 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f561b7163d5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.291248085 +0000 UTC m=+3.208842431,LastTimestamp:2026-01-26 00:09:18.291248085 +0000 UTC m=+3.208842431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.756807 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5633a388e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.697187557 +0000 UTC m=+3.614781893,LastTimestamp:2026-01-26 00:09:18.697187557 +0000 UTC m=+3.614781893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.766111 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f563495e51f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.713070879 +0000 UTC m=+3.630665225,LastTimestamp:2026-01-26 00:09:18.713070879 +0000 UTC m=+3.630665225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.770605 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5634ac49a3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:18.714538403 +0000 UTC m=+3.632132749,LastTimestamp:2026-01-26 00:09:18.714538403 +0000 UTC m=+3.632132749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.775847 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5650d1ee5b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.186767451 +0000 UTC m=+4.104361797,LastTimestamp:2026-01-26 00:09:19.186767451 +0000 UTC m=+4.104361797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.781561 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f565100da64 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.189842532 +0000 UTC m=+4.107436878,LastTimestamp:2026-01-26 00:09:19.189842532 +0000 UTC m=+4.107436878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.787317 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f5651121ed1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.190974161 +0000 UTC m=+4.108568507,LastTimestamp:2026-01-26 00:09:19.190974161 +0000 UTC m=+4.108568507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.793808 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5651c6bfe3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.202811875 +0000 UTC m=+4.120406221,LastTimestamp:2026-01-26 00:09:19.202811875 +0000 UTC m=+4.120406221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.799671 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5665392c68 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.529077864 +0000 UTC m=+4.446672210,LastTimestamp:2026-01-26 00:09:19.529077864 +0000 UTC m=+4.446672210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.805535 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5666943de0 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.551823328 +0000 UTC m=+4.469417674,LastTimestamp:2026-01-26 00:09:19.551823328 +0000 UTC m=+4.469417674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.810707 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5666ac1cb7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.553387703 +0000 UTC m=+4.470982059,LastTimestamp:2026-01-26 00:09:19.553387703 +0000 UTC m=+4.470982059,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.819923 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f566eac7c62 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.687629922 +0000 UTC m=+4.605224268,LastTimestamp:2026-01-26 00:09:19.687629922 +0000 UTC m=+4.605224268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.825910 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f567032cf5b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.713210203 +0000 UTC m=+4.630804549,LastTimestamp:2026-01-26 00:09:19.713210203 +0000 UTC m=+4.630804549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.831662 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f5673efa643 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.775917635 +0000 UTC m=+4.693511981,LastTimestamp:2026-01-26 00:09:19.775917635 +0000 UTC m=+4.693511981,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.837872 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f56742fb368 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.780115304 +0000 UTC m=+4.697709650,LastTimestamp:2026-01-26 00:09:19.780115304 +0000 UTC m=+4.697709650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.845901 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f567896319c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.853941148 +0000 UTC m=+4.771535494,LastTimestamp:2026-01-26 00:09:19.853941148 +0000 UTC 
m=+4.771535494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.850284 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f5678989ab1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.854099121 +0000 UTC m=+4.771693467,LastTimestamp:2026-01-26 00:09:19.854099121 +0000 UTC m=+4.771693467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.857449 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f5678ad9702 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:19.855474434 +0000 UTC m=+4.773068780,LastTimestamp:2026-01-26 00:09:19.855474434 +0000 UTC m=+4.773068780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.863429 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f568758141a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.101528602 +0000 UTC m=+5.019122948,LastTimestamp:2026-01-26 00:09:20.101528602 +0000 UTC m=+5.019122948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.869616 5107 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f568856368e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.118183566 +0000 UTC m=+5.035777922,LastTimestamp:2026-01-26 00:09:20.118183566 +0000 UTC m=+5.035777922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.876909 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56887b7494 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.120624276 +0000 UTC m=+5.038218632,LastTimestamp:2026-01-26 00:09:20.120624276 +0000 UTC m=+5.038218632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.881999 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5688cf73f1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.126129137 +0000 UTC m=+5.043723493,LastTimestamp:2026-01-26 00:09:20.126129137 +0000 UTC m=+5.043723493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.886898 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f56898fcdba 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.138735034 +0000 UTC m=+5.056329380,LastTimestamp:2026-01-26 00:09:20.138735034 +0000 UTC m=+5.056329380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.892237 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f568e05253f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.213534015 +0000 UTC m=+5.131128361,LastTimestamp:2026-01-26 00:09:20.213534015 +0000 UTC m=+5.131128361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.897541 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f569780e36c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.372638572 +0000 UTC m=+5.290232918,LastTimestamp:2026-01-26 00:09:20.372638572 +0000 UTC m=+5.290232918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.902471 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f569db1f4e1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.476517601 +0000 UTC m=+5.394111947,LastTimestamp:2026-01-26 00:09:20.476517601 +0000 UTC m=+5.394111947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.907606 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f569e4e5998 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.486767 +0000 UTC m=+5.404361356,LastTimestamp:2026-01-26 00:09:20.486767 +0000 UTC m=+5.404361356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.912218 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56af5aed24 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.772803876 +0000 UTC m=+5.690398222,LastTimestamp:2026-01-26 00:09:20.772803876 +0000 UTC m=+5.690398222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.916238 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f56afad1049 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: 
etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.778186825 +0000 UTC m=+5.695781171,LastTimestamp:2026-01-26 00:09:20.778186825 +0000 UTC m=+5.695781171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.920021 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56b37d24c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.842155207 +0000 UTC m=+5.759749553,LastTimestamp:2026-01-26 00:09:20.842155207 +0000 UTC m=+5.759749553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.924768 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f56b388218f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.842875279 +0000 UTC m=+5.760469625,LastTimestamp:2026-01-26 00:09:20.842875279 +0000 UTC m=+5.760469625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.930720 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56b396bc05 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:20.843832325 +0000 UTC m=+5.761426681,LastTimestamp:2026-01-26 00:09:20.843832325 +0000 UTC m=+5.761426681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.936113 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56be344f1f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.021931295 +0000 UTC m=+5.939525641,LastTimestamp:2026-01-26 00:09:21.021931295 +0000 UTC m=+5.939525641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.940694 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f56be35ce23 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.022029347 +0000 UTC m=+5.939623693,LastTimestamp:2026-01-26 00:09:21.022029347 +0000 UTC m=+5.939623693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.946805 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f56bef50aeb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.034562283 +0000 UTC m=+5.952156639,LastTimestamp:2026-01-26 00:09:21.034562283 +0000 UTC m=+5.952156639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.952130 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56bf00a91b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.035323675 +0000 UTC m=+5.952918031,LastTimestamp:2026-01-26 00:09:21.035323675 +0000 UTC m=+5.952918031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.956998 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56bf10b007 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.036374023 +0000 UTC m=+5.953968369,LastTimestamp:2026-01-26 00:09:21.036374023 +0000 UTC m=+5.953968369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.962670 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f56cb659578 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.243264376 +0000 UTC m=+6.160858722,LastTimestamp:2026-01-26 00:09:21.243264376 +0000 UTC m=+6.160858722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.966951 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56cc1c8a6a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.255254634 +0000 UTC m=+6.172848990,LastTimestamp:2026-01-26 00:09:21.255254634 +0000 UTC m=+6.172848990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.972726 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56d24b3522 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.35897629 +0000 UTC m=+6.276570626,LastTimestamp:2026-01-26 00:09:21.35897629 +0000 UTC m=+6.276570626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.977044 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56d27d0326 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.362240294 +0000 UTC m=+6.279834640,LastTimestamp:2026-01-26 00:09:21.362240294 +0000 UTC m=+6.279834640,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.981696 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f56f1e3d76a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 
00:09:21.889073002 +0000 UTC m=+6.806667348,LastTimestamp:2026-01-26 00:09:21.889073002 +0000 UTC m=+6.806667348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.987079 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f56f28138e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.899387112 +0000 UTC m=+6.816981458,LastTimestamp:2026-01-26 00:09:21.899387112 +0000 UTC m=+6.816981458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.993071 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f56f295234f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.900692303 +0000 UTC m=+6.818286639,LastTimestamp:2026-01-26 00:09:21.900692303 +0000 UTC m=+6.818286639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:39 crc kubenswrapper[5107]: I0126 00:09:39.998253 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:39 crc kubenswrapper[5107]: E0126 00:09:39.998254 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56f2c766bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.903986367 +0000 UTC m=+6.821580713,LastTimestamp:2026-01-26 00:09:21.903986367 +0000 UTC m=+6.821580713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.001852 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56f4caeacd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.937771213 +0000 UTC m=+6.855365559,LastTimestamp:2026-01-26 00:09:21.937771213 +0000 UTC m=+6.855365559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.006399 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f570ab2aada openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:22.30528073 +0000 UTC m=+7.222875076,LastTimestamp:2026-01-26 00:09:22.30528073 +0000 UTC m=+7.222875076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.010454 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f570bd1794f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:22.324076879 +0000 UTC m=+7.241671225,LastTimestamp:2026-01-26 00:09:22.324076879 +0000 UTC m=+7.241671225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.015177 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f570be84f08 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:22.325573384 +0000 UTC m=+7.243167730,LastTimestamp:2026-01-26 00:09:22.325573384 +0000 UTC m=+7.243167730,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.018849 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5720ea2243 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:22.678014531 +0000 UTC m=+7.595608877,LastTimestamp:2026-01-26 00:09:22.678014531 +0000 UTC m=+7.595608877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.022783 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5721abb056 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:22.69069935 +0000 UTC m=+7.608293696,LastTimestamp:2026-01-26 00:09:22.69069935 +0000 UTC m=+7.608293696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.027398 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5721bc9e28 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:22.691808808 +0000 UTC m=+7.609403154,LastTimestamp:2026-01-26 00:09:22.691808808 
+0000 UTC m=+7.609403154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.032380 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f573abbca8c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:23.111185036 +0000 UTC m=+8.028779372,LastTimestamp:2026-01-26 00:09:23.111185036 +0000 UTC m=+8.028779372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.036559 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f573b809b78 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:23.124083576 +0000 UTC m=+8.041677922,LastTimestamp:2026-01-26 00:09:23.124083576 +0000 UTC m=+8.041677922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.042064 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f573b92cf53 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:23.125276499 +0000 UTC m=+8.042870845,LastTimestamp:2026-01-26 00:09:23.125276499 +0000 UTC m=+8.042870845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.048459 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f57498d4de6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:23.35979671 +0000 UTC m=+8.277391056,LastTimestamp:2026-01-26 00:09:23.35979671 +0000 UTC m=+8.277391056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.053582 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f574d591e2e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:23.423485486 +0000 UTC m=+8.341079832,LastTimestamp:2026-01-26 00:09:23.423485486 +0000 UTC m=+8.341079832,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.059770 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 26 00:09:40 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-controller-manager-crc.188e1f578d3a32e3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 26 00:09:40 crc kubenswrapper[5107]: body: Jan 26 00:09:40 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:24.495200995 +0000 UTC m=+9.412795351,LastTimestamp:2026-01-26 00:09:24.495200995 +0000 UTC m=+9.412795351,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:40 crc kubenswrapper[5107]: > Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.064814 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f578d3c329e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: 
Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:24.495331998 +0000 UTC m=+9.412926364,LastTimestamp:2026-01-26 00:09:24.495331998 +0000 UTC m=+9.412926364,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.069903 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:40 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f59ce70fab6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 26 00:09:40 crc kubenswrapper[5107]: body: Jan 26 00:09:40 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.179244726 +0000 UTC m=+19.096839092,LastTimestamp:2026-01-26 00:09:34.179244726 +0000 UTC m=+19.096839092,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:40 crc kubenswrapper[5107]: > Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.074016 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f59ce72f874 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.17937522 +0000 UTC m=+19.096969566,LastTimestamp:2026-01-26 00:09:34.17937522 +0000 UTC m=+19.096969566,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.078432 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:40 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f59df092899 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 26 00:09:40 crc kubenswrapper[5107]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 00:09:40 crc kubenswrapper[5107]: Jan 26 00:09:40 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.457653401 +0000 UTC m=+19.375247747,LastTimestamp:2026-01-26 00:09:34.457653401 +0000 UTC m=+19.375247747,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:40 crc kubenswrapper[5107]: > Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.082680 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f59df09f07c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.457704572 +0000 UTC m=+19.375298918,LastTimestamp:2026-01-26 00:09:34.457704572 +0000 UTC m=+19.375298918,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.086829 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 26 00:09:40 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-controller-manager-crc.188e1f59e14cd316 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 26 00:09:40 crc kubenswrapper[5107]: body: Jan 26 00:09:40 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.49564239 +0000 UTC m=+19.413236736,LastTimestamp:2026-01-26 00:09:34.49564239 +0000 UTC m=+19.413236736,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:40 crc kubenswrapper[5107]: > Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.091323 5107 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f59e14db92c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.495701292 +0000 UTC m=+19.413295658,LastTimestamp:2026-01-26 00:09:34.495701292 +0000 UTC m=+19.413295658,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.096835 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:40 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f5b11112b27 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:45994->192.168.126.11:17697: read: connection reset by peer Jan 26 00:09:40 crc kubenswrapper[5107]: body: Jan 26 00:09:40 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:39.592006439 +0000 UTC m=+24.509600785,LastTimestamp:2026-01-26 00:09:39.592006439 +0000 UTC m=+24.509600785,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:40 crc kubenswrapper[5107]: > Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.102556 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5b1111fdd2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45994->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:39.59206037 +0000 UTC m=+24.509654716,LastTimestamp:2026-01-26 00:09:39.59206037 +0000 UTC m=+24.509654716,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.106929 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:40 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f5b111719de openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 26 00:09:40 crc kubenswrapper[5107]: body: Jan 26 00:09:40 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:39.59239523 +0000 UTC m=+24.509989566,LastTimestamp:2026-01-26 00:09:39.59239523 +0000 UTC m=+24.509989566,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:40 crc kubenswrapper[5107]: > Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.111076 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5b1117d2a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:39.592442531 +0000 UTC m=+24.510036877,LastTimestamp:2026-01-26 00:09:39.592442531 +0000 UTC m=+24.510036877,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: I0126 00:09:40.243103 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 00:09:40 crc kubenswrapper[5107]: I0126 00:09:40.243185 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.251423 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< 
Jan 26 00:09:40 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f5b37e0ef66 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 26 00:09:40 crc kubenswrapper[5107]: body: Jan 26 00:09:40 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:40.243156838 +0000 UTC m=+25.160751184,LastTimestamp:2026-01-26 00:09:40.243156838 +0000 UTC m=+25.160751184,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:40 crc kubenswrapper[5107]: > Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.256494 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5b37e1bfc4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:40.24321018 +0000 UTC m=+25.160804516,LastTimestamp:2026-01-26 00:09:40.24321018 +0000 UTC m=+25.160804516,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:40 crc kubenswrapper[5107]: I0126 00:09:40.341434 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 26 00:09:40 crc kubenswrapper[5107]: I0126 00:09:40.344154 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b1aca2d73febffe83b4f184385b1823dcd40098f6bdf6a1b1c46b26f7017dda0" exitCode=255 Jan 26 00:09:40 crc kubenswrapper[5107]: I0126 00:09:40.344236 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b1aca2d73febffe83b4f184385b1823dcd40098f6bdf6a1b1c46b26f7017dda0"} Jan 26 00:09:40 crc kubenswrapper[5107]: I0126 00:09:40.344955 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:40 crc kubenswrapper[5107]: I0126 00:09:40.345827 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:40 crc kubenswrapper[5107]: I0126 00:09:40.345903 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:40 crc kubenswrapper[5107]: I0126 00:09:40.345914 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.346399 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:40 crc kubenswrapper[5107]: I0126 00:09:40.346786 5107 scope.go:117] "RemoveContainer" containerID="b1aca2d73febffe83b4f184385b1823dcd40098f6bdf6a1b1c46b26f7017dda0" Jan 26 00:09:40 crc kubenswrapper[5107]: E0126 00:09:40.377863 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f56d27d0326\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56d27d0326 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.362240294 +0000 UTC m=+6.279834640,LastTimestamp:2026-01-26 00:09:40.348048064 +0000 UTC m=+25.265642410,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.073342 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:41 crc kubenswrapper[5107]: E0126 00:09:41.147855 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f56f2c766bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56f2c766bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.903986367 +0000 UTC m=+6.821580713,LastTimestamp:2026-01-26 00:09:41.139782683 +0000 UTC m=+26.057377049,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:41 crc kubenswrapper[5107]: E0126 00:09:41.198215 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f56f4caeacd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56f4caeacd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.937771213 +0000 UTC m=+6.855365559,LastTimestamp:2026-01-26 00:09:41.193312466 +0000 UTC m=+26.110906812,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.350515 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.351943 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2a2910ec71701ba2ee01bbce2339a8859cae3913f3e8c07bd1a5ca36f18562e6"} Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.352128 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.352614 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.352639 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.352650 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:41 crc kubenswrapper[5107]: E0126 00:09:41.352948 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.501233 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.501554 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.503319 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.503362 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.503380 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:41 crc kubenswrapper[5107]: E0126 00:09:41.503870 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:41 crc kubenswrapper[5107]: I0126 00:09:41.506376 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:42 crc kubenswrapper[5107]: I0126 00:09:42.000459 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:42 crc kubenswrapper[5107]: I0126 00:09:42.355230 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:42 crc kubenswrapper[5107]: I0126 00:09:42.356386 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:42 crc kubenswrapper[5107]: I0126 00:09:42.356468 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:42 crc kubenswrapper[5107]: I0126 00:09:42.356480 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:42 crc kubenswrapper[5107]: E0126 00:09:42.356909 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:43 crc kubenswrapper[5107]: I0126 00:09:43.002407 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:43 crc kubenswrapper[5107]: E0126 00:09:43.311274 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:43 crc kubenswrapper[5107]: E0126 00:09:43.584603 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:44 crc kubenswrapper[5107]: I0126 00:09:44.000867 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:44 crc kubenswrapper[5107]: I0126 00:09:44.361146 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:09:44 crc kubenswrapper[5107]: I0126 00:09:44.361613 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 26 00:09:44 crc kubenswrapper[5107]: I0126 00:09:44.363274 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2a2910ec71701ba2ee01bbce2339a8859cae3913f3e8c07bd1a5ca36f18562e6" exitCode=255 Jan 26 00:09:44 crc kubenswrapper[5107]: I0126 00:09:44.363362 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"2a2910ec71701ba2ee01bbce2339a8859cae3913f3e8c07bd1a5ca36f18562e6"} Jan 26 00:09:44 crc kubenswrapper[5107]: I0126 00:09:44.363430 5107 scope.go:117] "RemoveContainer" 
containerID="b1aca2d73febffe83b4f184385b1823dcd40098f6bdf6a1b1c46b26f7017dda0" Jan 26 00:09:44 crc kubenswrapper[5107]: I0126 00:09:44.363687 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:44 crc kubenswrapper[5107]: I0126 00:09:44.364337 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:44 crc kubenswrapper[5107]: I0126 00:09:44.364415 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:44 crc kubenswrapper[5107]: I0126 00:09:44.364449 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:44 crc kubenswrapper[5107]: E0126 00:09:44.365054 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:44 crc kubenswrapper[5107]: I0126 00:09:44.365400 5107 scope.go:117] "RemoveContainer" containerID="2a2910ec71701ba2ee01bbce2339a8859cae3913f3e8c07bd1a5ca36f18562e6" Jan 26 00:09:44 crc kubenswrapper[5107]: E0126 00:09:44.365722 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:44 crc kubenswrapper[5107]: E0126 00:09:44.373449 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5c2d99b51e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:44.365683998 +0000 UTC m=+29.283278344,LastTimestamp:2026-01-26 00:09:44.365683998 +0000 UTC m=+29.283278344,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:44 crc kubenswrapper[5107]: E0126 00:09:44.628194 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:45 crc kubenswrapper[5107]: I0126 00:09:45.000383 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:45 crc kubenswrapper[5107]: I0126 00:09:45.367295 5107 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:09:45 crc kubenswrapper[5107]: E0126 00:09:45.678451 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:09:45 crc kubenswrapper[5107]: I0126 00:09:45.999385 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:46 crc kubenswrapper[5107]: E0126 00:09:46.403008 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:46 crc kubenswrapper[5107]: I0126 00:09:46.570247 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:46 crc kubenswrapper[5107]: I0126 00:09:46.571129 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:46 crc kubenswrapper[5107]: I0126 00:09:46.571188 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:46 crc kubenswrapper[5107]: I0126 00:09:46.571200 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:46 crc kubenswrapper[5107]: I0126 00:09:46.571226 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:46 crc kubenswrapper[5107]: E0126 00:09:46.580437 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:46 crc kubenswrapper[5107]: E0126 00:09:46.617730 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:47 crc kubenswrapper[5107]: I0126 00:09:47.066217 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:48 crc kubenswrapper[5107]: I0126 00:09:48.000243 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:48 crc kubenswrapper[5107]: I0126 00:09:48.999747 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:50 crc kubenswrapper[5107]: I0126 00:09:50.000280 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" 
is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:50 crc kubenswrapper[5107]: I0126 00:09:50.243118 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:50 crc kubenswrapper[5107]: I0126 00:09:50.243389 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:50 crc kubenswrapper[5107]: I0126 00:09:50.244122 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:50 crc kubenswrapper[5107]: I0126 00:09:50.244152 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:50 crc kubenswrapper[5107]: I0126 00:09:50.244161 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:50 crc kubenswrapper[5107]: E0126 00:09:50.244425 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:50 crc kubenswrapper[5107]: I0126 00:09:50.244658 5107 scope.go:117] "RemoveContainer" containerID="2a2910ec71701ba2ee01bbce2339a8859cae3913f3e8c07bd1a5ca36f18562e6" Jan 26 00:09:50 crc kubenswrapper[5107]: E0126 00:09:50.244829 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:50 crc kubenswrapper[5107]: E0126 00:09:50.250657 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5c2d99b51e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5c2d99b51e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:44.365683998 +0000 UTC m=+29.283278344,LastTimestamp:2026-01-26 00:09:50.244801806 +0000 UTC m=+35.162396152,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:51 crc kubenswrapper[5107]: I0126 00:09:51.000966 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:51 crc kubenswrapper[5107]: I0126 00:09:51.353562 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:51 crc 
kubenswrapper[5107]: I0126 00:09:51.353957 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:51 crc kubenswrapper[5107]: I0126 00:09:51.355024 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:51 crc kubenswrapper[5107]: I0126 00:09:51.355073 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:51 crc kubenswrapper[5107]: I0126 00:09:51.355089 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:51 crc kubenswrapper[5107]: E0126 00:09:51.355580 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:51 crc kubenswrapper[5107]: I0126 00:09:51.356026 5107 scope.go:117] "RemoveContainer" containerID="2a2910ec71701ba2ee01bbce2339a8859cae3913f3e8c07bd1a5ca36f18562e6" Jan 26 00:09:51 crc kubenswrapper[5107]: E0126 00:09:51.356356 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:51 crc kubenswrapper[5107]: E0126 00:09:51.365607 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5c2d99b51e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5c2d99b51e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:44.365683998 +0000 UTC m=+29.283278344,LastTimestamp:2026-01-26 00:09:51.356304933 +0000 UTC m=+36.273899299,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:52 crc kubenswrapper[5107]: I0126 00:09:52.073435 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:52 crc kubenswrapper[5107]: E0126 00:09:52.686513 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:09:53 crc kubenswrapper[5107]: I0126 00:09:53.000479 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:53 crc kubenswrapper[5107]: I0126 00:09:53.581425 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:53 crc kubenswrapper[5107]: I0126 00:09:53.582177 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:53 crc kubenswrapper[5107]: I0126 00:09:53.582204 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:53 crc kubenswrapper[5107]: I0126 00:09:53.582212 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:53 crc kubenswrapper[5107]: I0126 00:09:53.582234 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:53 crc kubenswrapper[5107]: E0126 00:09:53.590357 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:54 crc kubenswrapper[5107]: I0126 00:09:54.002523 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:55 crc kubenswrapper[5107]: I0126 00:09:55.003653 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:56 crc kubenswrapper[5107]: I0126 00:09:56.001178 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:56 crc kubenswrapper[5107]: E0126 00:09:56.619228 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:57 crc kubenswrapper[5107]: I0126 00:09:57.001267 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:57 crc kubenswrapper[5107]: E0126 00:09:57.243098 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:58 crc kubenswrapper[5107]: I0126 00:09:58.000971 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:58 crc kubenswrapper[5107]: E0126 00:09:58.472131 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group 
\"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:59 crc kubenswrapper[5107]: I0126 00:09:59.001180 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:59 crc kubenswrapper[5107]: E0126 00:09:59.694071 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:10:00 crc kubenswrapper[5107]: I0126 00:10:00.001704 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:00 crc kubenswrapper[5107]: I0126 00:10:00.590949 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:00 crc kubenswrapper[5107]: I0126 00:10:00.592163 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:00 crc kubenswrapper[5107]: I0126 00:10:00.592227 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:00 crc kubenswrapper[5107]: I0126 00:10:00.592243 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:00 crc kubenswrapper[5107]: I0126 00:10:00.592272 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:00 crc kubenswrapper[5107]: E0126 00:10:00.604314 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:10:01 crc kubenswrapper[5107]: I0126 00:10:01.003147 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:02 crc kubenswrapper[5107]: I0126 00:10:02.013323 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:02 crc kubenswrapper[5107]: I0126 00:10:02.112766 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:02 crc kubenswrapper[5107]: I0126 00:10:02.114476 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:02 crc kubenswrapper[5107]: I0126 00:10:02.115044 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:02 crc kubenswrapper[5107]: I0126 00:10:02.115116 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:02 crc kubenswrapper[5107]: E0126 00:10:02.115700 
5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:02 crc kubenswrapper[5107]: I0126 00:10:02.116102 5107 scope.go:117] "RemoveContainer" containerID="2a2910ec71701ba2ee01bbce2339a8859cae3913f3e8c07bd1a5ca36f18562e6" Jan 26 00:10:02 crc kubenswrapper[5107]: E0126 00:10:02.125026 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f56d27d0326\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56d27d0326 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.362240294 +0000 UTC m=+6.279834640,LastTimestamp:2026-01-26 00:10:02.117874094 +0000 UTC m=+47.035468460,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:03 crc kubenswrapper[5107]: I0126 00:10:03.003064 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:03 crc kubenswrapper[5107]: E0126 00:10:03.546254 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f56f2c766bf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56f2c766bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.903986367 +0000 UTC m=+6.821580713,LastTimestamp:2026-01-26 00:10:03.540764427 +0000 UTC m=+48.458358773,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:03 crc kubenswrapper[5107]: I0126 00:10:03.552693 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:10:03 crc kubenswrapper[5107]: I0126 00:10:03.554243 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb"} Jan 26 00:10:03 crc kubenswrapper[5107]: E0126 00:10:03.687324 5107 
reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:10:03 crc kubenswrapper[5107]: E0126 00:10:03.819438 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f56f4caeacd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f56f4caeacd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:21.937771213 +0000 UTC m=+6.855365559,LastTimestamp:2026-01-26 00:10:03.812361521 +0000 UTC m=+48.729955877,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:04 crc kubenswrapper[5107]: I0126 00:10:04.000560 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:04 crc kubenswrapper[5107]: E0126 00:10:04.462972 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:10:04 crc kubenswrapper[5107]: I0126 00:10:04.558208 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:04 crc kubenswrapper[5107]: I0126 00:10:04.559071 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:04 crc kubenswrapper[5107]: I0126 00:10:04.559117 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:04 crc kubenswrapper[5107]: I0126 00:10:04.559129 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:04 crc kubenswrapper[5107]: E0126 00:10:04.559502 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.000988 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.135058 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.135299 5107 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.137202 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.137237 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.137249 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:05 crc kubenswrapper[5107]: E0126 00:10:05.137564 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.562333 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.563434 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.565190 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb" exitCode=255 Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.565286 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb"} Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.565323 5107 scope.go:117] "RemoveContainer" containerID="2a2910ec71701ba2ee01bbce2339a8859cae3913f3e8c07bd1a5ca36f18562e6" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.565771 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.566667 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.566743 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.566759 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:05 crc kubenswrapper[5107]: E0126 00:10:05.567398 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:05 crc kubenswrapper[5107]: I0126 00:10:05.567832 5107 scope.go:117] "RemoveContainer" containerID="cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb" Jan 26 00:10:05 crc kubenswrapper[5107]: E0126 00:10:05.568254 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:05 crc kubenswrapper[5107]: E0126 00:10:05.574414 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5c2d99b51e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5c2d99b51e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:44.365683998 +0000 UTC m=+29.283278344,LastTimestamp:2026-01-26 00:10:05.568182798 +0000 UTC m=+50.485777154,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:06 crc kubenswrapper[5107]: I0126 00:10:06.000643 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:06 crc kubenswrapper[5107]: I0126 00:10:06.570139 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:10:06 crc kubenswrapper[5107]: E0126 00:10:06.620540 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:10:06 crc kubenswrapper[5107]: E0126 00:10:06.703274 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:10:06 crc kubenswrapper[5107]: I0126 00:10:06.999802 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:07 crc kubenswrapper[5107]: I0126 00:10:07.604787 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:07 crc kubenswrapper[5107]: I0126 00:10:07.605876 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:07 crc kubenswrapper[5107]: I0126 00:10:07.605927 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:07 crc kubenswrapper[5107]: I0126 00:10:07.605940 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:07 crc kubenswrapper[5107]: I0126 00:10:07.605965 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:07 crc kubenswrapper[5107]: E0126 00:10:07.618171 5107 kubelet_node_status.go:116] "Unable to register node with API 
server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:10:08 crc kubenswrapper[5107]: I0126 00:10:08.002033 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:09 crc kubenswrapper[5107]: I0126 00:10:09.001905 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:10 crc kubenswrapper[5107]: I0126 00:10:10.002460 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:10 crc kubenswrapper[5107]: I0126 00:10:10.242714 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:10 crc kubenswrapper[5107]: I0126 00:10:10.243281 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:10 crc kubenswrapper[5107]: I0126 00:10:10.244655 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:10 crc kubenswrapper[5107]: I0126 00:10:10.244700 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:10 crc kubenswrapper[5107]: I0126 00:10:10.244714 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:10 crc kubenswrapper[5107]: E0126 00:10:10.245241 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:10 crc kubenswrapper[5107]: I0126 00:10:10.245626 5107 scope.go:117] "RemoveContainer" containerID="cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb" Jan 26 00:10:10 crc kubenswrapper[5107]: E0126 00:10:10.245949 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:10 crc kubenswrapper[5107]: E0126 00:10:10.252081 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5c2d99b51e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5c2d99b51e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container 
kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:44.365683998 +0000 UTC m=+29.283278344,LastTimestamp:2026-01-26 00:10:10.24587455 +0000 UTC m=+55.163468896,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:11 crc kubenswrapper[5107]: I0126 00:10:11.000319 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:11 crc kubenswrapper[5107]: I0126 00:10:11.999362 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:13 crc kubenswrapper[5107]: I0126 00:10:13.001107 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:13 crc kubenswrapper[5107]: E0126 00:10:13.709766 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.001875 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.559006 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.559442 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.560985 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.561028 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.561040 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5107]: E0126 00:10:14.561437 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.561771 5107 scope.go:117] "RemoveContainer" containerID="cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb" Jan 26 00:10:14 crc kubenswrapper[5107]: E0126 00:10:14.562010 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:14 crc kubenswrapper[5107]: E0126 00:10:14.567685 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5c2d99b51e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5c2d99b51e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:44.365683998 +0000 UTC m=+29.283278344,LastTimestamp:2026-01-26 00:10:14.561974418 +0000 UTC m=+59.479568764,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.618443 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.620007 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.620077 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.620093 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.620132 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:14 crc kubenswrapper[5107]: E0126 00:10:14.633948 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:10:14 crc kubenswrapper[5107]: I0126 00:10:14.997348 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:16 crc kubenswrapper[5107]: I0126 00:10:16.001359 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:16 crc kubenswrapper[5107]: E0126 00:10:16.621121 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:10:17 crc kubenswrapper[5107]: I0126 00:10:17.001195 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API 
group "storage.k8s.io" at the cluster scope Jan 26 00:10:18 crc kubenswrapper[5107]: I0126 00:10:18.002141 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:19 crc kubenswrapper[5107]: I0126 00:10:19.001078 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:20 crc kubenswrapper[5107]: I0126 00:10:20.001909 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:20 crc kubenswrapper[5107]: E0126 00:10:20.715947 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:10:21 crc kubenswrapper[5107]: I0126 00:10:21.001989 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:21 crc kubenswrapper[5107]: I0126 00:10:21.634432 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:21 crc kubenswrapper[5107]: I0126 00:10:21.636301 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5107]: I0126 00:10:21.636394 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5107]: I0126 00:10:21.636411 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5107]: I0126 00:10:21.636446 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:21 crc kubenswrapper[5107]: E0126 00:10:21.648356 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:10:22 crc kubenswrapper[5107]: I0126 00:10:21.999852 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:22 crc kubenswrapper[5107]: I0126 00:10:22.458925 5107 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-k8nbw" Jan 26 00:10:22 crc kubenswrapper[5107]: I0126 00:10:22.471498 5107 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-k8nbw" Jan 26 00:10:22 crc kubenswrapper[5107]: I0126 00:10:22.543436 5107 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 
26 00:10:22 crc kubenswrapper[5107]: I0126 00:10:22.750431 5107 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 00:10:23 crc kubenswrapper[5107]: I0126 00:10:23.473428 5107 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-25 00:05:22 +0000 UTC" deadline="2026-02-21 07:42:09.170968787 +0000 UTC" Jan 26 00:10:23 crc kubenswrapper[5107]: I0126 00:10:23.473550 5107 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="631h31m45.69742712s" Jan 26 00:10:26 crc kubenswrapper[5107]: E0126 00:10:26.622537 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.449502 5107 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.527063 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.543736 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.644176 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.649420 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.651179 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.651292 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.651310 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.651540 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.662986 5107 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.663391 5107 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.664592 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.664685 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.664701 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.664727 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.664741 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5107]: E0126 00:10:28.685007 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.691092 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.691153 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.691168 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.691187 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.691201 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5107]: E0126 00:10:28.705989 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.710663 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.710734 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.710748 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.710771 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.710789 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5107]: E0126 00:10:28.724483 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.732584 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.732663 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.732685 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.732715 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.732731 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.752334 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:28 crc kubenswrapper[5107]: E0126 00:10:28.754683 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.760697 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.760763 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.760774 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.760796 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.760831 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5107]: E0126 00:10:28.774174 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:28 crc kubenswrapper[5107]: E0126 00:10:28.774837 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.776530 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.776587 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.776602 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.776622 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.776634 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.844179 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.879526 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.879590 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.879604 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.879629 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.879646 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.983117 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.983190 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.983210 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.983233 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:28 crc kubenswrapper[5107]: I0126 00:10:28.983247 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:28Z","lastTransitionTime":"2026-01-26T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.086444 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.086506 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.086520 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.086543 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.086560 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.146851 5107 apiserver.go:52] "Watching apiserver" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.156077 5107 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.157059 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-multus/network-metrics-daemon-bdn4m","openshift-machine-config-operator/machine-config-daemon-94c4c","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/multus-additional-cni-plugins-4vppd","openshift-multus/multus-f2mpq","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-nvznv","openshift-dns/node-resolver-wbn74","openshift-image-registry/node-ca-p96sx","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"] Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.158624 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.159656 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.159767 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.162360 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.162533 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.163299 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.164278 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.164302 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.164368 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.164862 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.165151 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.165319 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.166668 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.167268 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.168391 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.168744 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.183503 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.188873 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.189049 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.189153 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.189254 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.189372 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.198165 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.210693 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.223569 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.238431 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.250973 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.291810 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.291879 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.291920 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.291943 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.291958 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293111 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293159 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293188 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293222 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293251 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293288 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293310 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293341 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293374 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293397 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293429 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293473 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f6f097f-b642-4bc7-ae13-b78dad78b73e-serviceca\") pod \"node-ca-p96sx\" (UID: \"4f6f097f-b642-4bc7-ae13-b78dad78b73e\") " pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293502 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ptwt\" (UniqueName: \"kubernetes.io/projected/4f6f097f-b642-4bc7-ae13-b78dad78b73e-kube-api-access-5ptwt\") pod \"node-ca-p96sx\" (UID: \"4f6f097f-b642-4bc7-ae13-b78dad78b73e\") " pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293529 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293559 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f6f097f-b642-4bc7-ae13-b78dad78b73e-host\") pod \"node-ca-p96sx\" (UID: \"4f6f097f-b642-4bc7-ae13-b78dad78b73e\") " pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293598 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.293656 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 
00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.294279 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.294534 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:29.79441077 +0000 UTC m=+74.712005116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.294329 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.294752 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:29.794740939 +0000 UTC m=+74.712335295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.308054 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.308107 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.308128 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.308258 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:29.808225725 +0000 UTC m=+74.725820071 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.310276 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.310298 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.310310 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.310366 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:29.810352345 +0000 UTC m=+74.727946681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.394133 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.394228 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.394272 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f6f097f-b642-4bc7-ae13-b78dad78b73e-serviceca\") pod \"node-ca-p96sx\" (UID: \"4f6f097f-b642-4bc7-ae13-b78dad78b73e\") " pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.394300 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5ptwt\" (UniqueName: \"kubernetes.io/projected/4f6f097f-b642-4bc7-ae13-b78dad78b73e-kube-api-access-5ptwt\") pod \"node-ca-p96sx\" (UID: 
\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\") " pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.394316 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f6f097f-b642-4bc7-ae13-b78dad78b73e-host\") pod \"node-ca-p96sx\" (UID: \"4f6f097f-b642-4bc7-ae13-b78dad78b73e\") " pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.394302 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.394400 5107 configmap.go:193] Couldn't get configMap openshift-image-registry/image-registry-certificates: object "openshift-image-registry"/"image-registry-certificates" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.394516 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f6f097f-b642-4bc7-ae13-b78dad78b73e-serviceca podName:4f6f097f-b642-4bc7-ae13-b78dad78b73e nodeName:}" failed. No retries permitted until 2026-01-26 00:10:29.894487601 +0000 UTC m=+74.812081947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serviceca" (UniqueName: "kubernetes.io/configmap/4f6f097f-b642-4bc7-ae13-b78dad78b73e-serviceca") pod "node-ca-p96sx" (UID: "4f6f097f-b642-4bc7-ae13-b78dad78b73e") : object "openshift-image-registry"/"image-registry-certificates" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.394738 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.394793 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f6f097f-b642-4bc7-ae13-b78dad78b73e-host\") pod \"node-ca-p96sx\" (UID: \"4f6f097f-b642-4bc7-ae13-b78dad78b73e\") " pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.395653 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.395707 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.395721 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.395739 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.395758 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.407424 5107 projected.go:289] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: object "openshift-image-registry"/"kube-root-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.407459 5107 projected.go:289] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: object "openshift-image-registry"/"openshift-service-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.407474 5107 projected.go:194] Error preparing data for projected volume kube-api-access-5ptwt for pod openshift-image-registry/node-ca-p96sx: [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.407564 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f6f097f-b642-4bc7-ae13-b78dad78b73e-kube-api-access-5ptwt podName:4f6f097f-b642-4bc7-ae13-b78dad78b73e nodeName:}" failed. No retries permitted until 2026-01-26 00:10:29.907530995 +0000 UTC m=+74.825125351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5ptwt" (UniqueName: "kubernetes.io/projected/4f6f097f-b642-4bc7-ae13-b78dad78b73e-kube-api-access-5ptwt") pod "node-ca-p96sx" (UID: "4f6f097f-b642-4bc7-ae13-b78dad78b73e") : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.498573 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.498659 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.498875 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.498941 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.498985 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.546098 5107 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.546546 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.547194 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.547226 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.553441 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.553595 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.553837 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.553939 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.554625 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.602417 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 
00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.602488 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.602504 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.602534 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.602551 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.705053 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.705122 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.705136 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.705154 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.705171 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.766333 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.766311 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.766582 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.769824 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.769871 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.770292 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.769861 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.777261 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.787535 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.789719 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.796976 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.797065 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.797184 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.797193 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.797284 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:30.797254734 +0000 UTC m=+75.714849080 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.797348 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:30.797328936 +0000 UTC m=+75.714923282 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.798057 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:29 crc kubenswrapper[5107]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:29 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:29 crc kubenswrapper[5107]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 26 00:10:29 crc kubenswrapper[5107]: source /etc/kubernetes/apiserver-url.env Jan 26 00:10:29 crc kubenswrapper[5107]: else Jan 26 00:10:29 crc kubenswrapper[5107]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 26 00:10:29 crc kubenswrapper[5107]: exit 1 Jan 26 00:10:29 crc kubenswrapper[5107]: fi Jan 26 00:10:29 crc kubenswrapper[5107]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 26 00:10:29 crc kubenswrapper[5107]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:29 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.799667 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.803386 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.808728 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.808788 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.808807 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.808831 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.808851 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.819976 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.832288 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.844134 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.847649 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.856172 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.861756 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:29 crc kubenswrapper[5107]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:29 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:29 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:29 crc kubenswrapper[5107]: source "/env/_master" Jan 26 00:10:29 crc kubenswrapper[5107]: set +o allexport Jan 26 00:10:29 crc kubenswrapper[5107]: fi Jan 26 00:10:29 crc kubenswrapper[5107]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 26 00:10:29 crc kubenswrapper[5107]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 26 00:10:29 crc kubenswrapper[5107]: ho_enable="--enable-hybrid-overlay" Jan 26 00:10:29 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 26 00:10:29 crc kubenswrapper[5107]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 26 00:10:29 crc kubenswrapper[5107]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 26 00:10:29 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:29 crc kubenswrapper[5107]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 26 00:10:29 crc kubenswrapper[5107]: --webhook-host=127.0.0.1 \ Jan 26 00:10:29 crc kubenswrapper[5107]: --webhook-port=9743 \ Jan 26 00:10:29 crc kubenswrapper[5107]: ${ho_enable} \ Jan 26 00:10:29 crc kubenswrapper[5107]: --enable-interconnect \ Jan 26 00:10:29 crc kubenswrapper[5107]: --disable-approver \ Jan 26 00:10:29 crc kubenswrapper[5107]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 26 00:10:29 crc kubenswrapper[5107]: --wait-for-kubernetes-api=200s \ Jan 26 00:10:29 crc kubenswrapper[5107]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 26 00:10:29 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Jan 26 00:10:29 crc kubenswrapper[5107]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:29 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.865460 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.865859 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:29 crc kubenswrapper[5107]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:29 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:29 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:29 crc kubenswrapper[5107]: source "/env/_master" Jan 26 00:10:29 crc kubenswrapper[5107]: set +o allexport Jan 26 00:10:29 crc kubenswrapper[5107]: fi Jan 26 00:10:29 crc kubenswrapper[5107]: Jan 26 00:10:29 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 26 00:10:29 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:29 crc kubenswrapper[5107]: --disable-webhook \ Jan 26 00:10:29 crc kubenswrapper[5107]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 26 00:10:29 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Jan 26 00:10:29 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:29 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.867100 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 26 00:10:29 crc kubenswrapper[5107]: W0126 00:10:29.876488 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-2aa6430cf68dbef7e70de558d51909bdde3150a7e4e38c9f582f6b1d0e2d3c61 WatchSource:0}: Error finding container 2aa6430cf68dbef7e70de558d51909bdde3150a7e4e38c9f582f6b1d0e2d3c61: Status 404 returned error can't find the container with id 2aa6430cf68dbef7e70de558d51909bdde3150a7e4e38c9f582f6b1d0e2d3c61 Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.879547 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.881319 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.883215 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.894484 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.897264 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnj62\" (UniqueName: \"kubernetes.io/projected/65e0e338-0636-411c-ac3c-9972beecf25b-kube-api-access-rnj62\") pod \"node-resolver-wbn74\" (UID: \"65e0e338-0636-411c-ac3c-9972beecf25b\") " pod="openshift-dns/node-resolver-wbn74" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.897435 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.897563 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.897706 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.897748 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/65e0e338-0636-411c-ac3c-9972beecf25b-hosts-file\") pod \"node-resolver-wbn74\" (UID: \"65e0e338-0636-411c-ac3c-9972beecf25b\") " pod="openshift-dns/node-resolver-wbn74" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.897931 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.898043 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f6f097f-b642-4bc7-ae13-b78dad78b73e-serviceca\") pod \"node-ca-p96sx\" (UID: \"4f6f097f-b642-4bc7-ae13-b78dad78b73e\") " pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 
00:10:29.898099 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.898180 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/65e0e338-0636-411c-ac3c-9972beecf25b-tmp-dir\") pod \"node-resolver-wbn74\" (UID: \"65e0e338-0636-411c-ac3c-9972beecf25b\") " pod="openshift-dns/node-resolver-wbn74" Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.898200 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:30.898171518 +0000 UTC m=+75.815765864 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.897856 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.898264 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.898279 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5107]: E0126 00:10:29.898347 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:30.898325002 +0000 UTC m=+75.815919348 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.899042 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f6f097f-b642-4bc7-ae13-b78dad78b73e-serviceca\") pod \"node-ca-p96sx\" (UID: \"4f6f097f-b642-4bc7-ae13-b78dad78b73e\") " pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.906030 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.911380 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.911442 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.911458 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.911480 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.911494 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:29Z","lastTransitionTime":"2026-01-26T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.912041 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-wbn74" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.914600 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.914651 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.915701 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.922628 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.936852 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.948259 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.962037 5107 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.976091 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.987966 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.998368 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.998421 5107 reconciler_common.go:251] 
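Annotation: every "Failed to update status for pod" entry in this stretch fails for the same downstream reason. The API server cannot call the pod.network-node-identity.openshift.io admission webhook, which according to the error is served at https://127.0.0.1:9743/pod and refuses connections because the network-node-identity pod is itself still being recreated. A minimal reachability probe for that endpoint (diagnostic only; it sends no real AdmissionReview and deliberately skips certificate verification) could look like:

    // webhook_probe.go -- checks only whether 127.0.0.1:9743 accepts TLS connections.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Probe only: skip verification of the webhook's serving certificate.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://127.0.0.1:9743/pod")
        if err != nil {
            // Expected while the webhook is down: "connect: connection refused".
            fmt.Println("webhook endpoint unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("webhook endpoint answered:", resp.Status)
    }

The kubelet's status manager keeps retrying these patches, so the entries should stop on their own once the webhook container is back up; they are a symptom of the same startup ordering, not an independent fault.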
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-netd\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.998651 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-env-overrides\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.998702 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bm9q\" (UniqueName: \"kubernetes.io/projected/d12cfb26-8718-4def-8f36-c7eaa12bc463-kube-api-access-9bm9q\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.998752 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-etc-openvswitch\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.998902 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovn-node-metrics-cert\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.998942 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-kubelet\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.998994 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999064 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-config\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999101 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/65e0e338-0636-411c-ac3c-9972beecf25b-tmp-dir\") pod \"node-resolver-wbn74\" (UID: \"65e0e338-0636-411c-ac3c-9972beecf25b\") " 
pod="openshift-dns/node-resolver-wbn74" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999133 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-systemd\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999187 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5ptwt\" (UniqueName: \"kubernetes.io/projected/4f6f097f-b642-4bc7-ae13-b78dad78b73e-kube-api-access-5ptwt\") pod \"node-ca-p96sx\" (UID: \"4f6f097f-b642-4bc7-ae13-b78dad78b73e\") " pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999213 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-netns\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999238 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-ovn-kubernetes\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999266 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rnj62\" (UniqueName: \"kubernetes.io/projected/65e0e338-0636-411c-ac3c-9972beecf25b-kube-api-access-rnj62\") pod \"node-resolver-wbn74\" (UID: \"65e0e338-0636-411c-ac3c-9972beecf25b\") " pod="openshift-dns/node-resolver-wbn74" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999284 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-slash\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999301 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-log-socket\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999323 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-bin\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999397 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-ovn\") pod \"ovnkube-node-nvznv\" (UID: 
\"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999476 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-script-lib\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999498 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-var-lib-openvswitch\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999542 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-systemd-units\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999575 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/65e0e338-0636-411c-ac3c-9972beecf25b-hosts-file\") pod \"node-resolver-wbn74\" (UID: \"65e0e338-0636-411c-ac3c-9972beecf25b\") " pod="openshift-dns/node-resolver-wbn74" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999595 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-openvswitch\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999611 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-node-log\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999750 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/65e0e338-0636-411c-ac3c-9972beecf25b-hosts-file\") pod \"node-resolver-wbn74\" (UID: \"65e0e338-0636-411c-ac3c-9972beecf25b\") " pod="openshift-dns/node-resolver-wbn74" Jan 26 00:10:29 crc kubenswrapper[5107]: I0126 00:10:29.999593 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/65e0e338-0636-411c-ac3c-9972beecf25b-tmp-dir\") pod \"node-resolver-wbn74\" (UID: \"65e0e338-0636-411c-ac3c-9972beecf25b\") " pod="openshift-dns/node-resolver-wbn74" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.004975 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ptwt\" (UniqueName: \"kubernetes.io/projected/4f6f097f-b642-4bc7-ae13-b78dad78b73e-kube-api-access-5ptwt\") pod \"node-ca-p96sx\" (UID: \"4f6f097f-b642-4bc7-ae13-b78dad78b73e\") " 
pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.009091 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.013988 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.014036 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.014047 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.014066 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.014079 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.016957 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnj62\" (UniqueName: \"kubernetes.io/projected/65e0e338-0636-411c-ac3c-9972beecf25b-kube-api-access-rnj62\") pod \"node-resolver-wbn74\" (UID: \"65e0e338-0636-411c-ac3c-9972beecf25b\") " pod="openshift-dns/node-resolver-wbn74" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.024601 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
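Annotation: the repeated NodeNotReady condition is driven entirely by the runtime network status. NetworkReady stays false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, which in an OVN-Kubernetes cluster is normally written by the ovnkube-node pod whose volumes are being mounted in this same excerpt. A stand-alone check of that directory (a sketch, not the kubelet's own logic) would be:

    // cni_dir_check.go -- lists /etc/kubernetes/cni/net.d and reports CNI config files.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the NodeNotReady message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI config dir:", err)
            return
        }
        found := 0
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
                found++
            }
        }
        if found == 0 {
            fmt.Println("no CNI configuration files yet; the node stays NotReady until one appears")
        }
    }

Host-network pods such as ovnkube-node-nvznv and node-resolver-wbn74 are not blocked by the missing CNI configuration, which is how that configuration eventually gets written.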
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.036387 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.047898 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.061689 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.065596 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.068762 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.068945 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.069077 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.069297 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.069783 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.070556 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.070728 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.080813 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
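Annotation: the "Caches populated" lines for openshift-ovn-kubernetes above mark the point at which the kubelet's per-namespace ConfigMap/Secret reflectors have finished their initial list-and-watch, which is what lets the earlier "not registered" volume errors for pods in that namespace clear. Client-go informers expose the same populate-then-wait pattern directly; a small sketch of waiting for such a cache to sync (namespace and kubeconfig path are assumptions) is:

    // cache_sync_sketch.go -- waits for a namespace-scoped ConfigMap informer cache to populate.
    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        factory := informers.NewSharedInformerFactoryWithOptions(cs, 30*time.Second,
            informers.WithNamespace("openshift-ovn-kubernetes"))
        cmInformer := factory.Core().V1().ConfigMaps().Informer()

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        factory.Start(ctx.Done())
        if !cache.WaitForCacheSync(ctx.Done(), cmInformer.HasSynced) {
            fmt.Println("cache did not sync before the timeout")
            return
        }
        fmt.Println("ConfigMap cache populated for openshift-ovn-kubernetes")
    }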
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.085548 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-p96sx" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.096346 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: W0126 00:10:30.099609 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f6f097f_b642_4bc7_ae13_b78dad78b73e.slice/crio-0922df9d09ca0ed17e1453a634a259dac42adc2fbfbf8c8c68009bdf28a4c651 WatchSource:0}: Error finding container 0922df9d09ca0ed17e1453a634a259dac42adc2fbfbf8c8c68009bdf28a4c651: Status 404 returned error can't find the container with id 0922df9d09ca0ed17e1453a634a259dac42adc2fbfbf8c8c68009bdf28a4c651 Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.099801 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-var-lib-openvswitch\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.099935 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-var-lib-openvswitch\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.099946 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-systemd-units\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100016 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-systemd-units\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100034 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-openvswitch\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100060 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-node-log\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100081 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-netd\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100098 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-env-overrides\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100120 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9bm9q\" (UniqueName: \"kubernetes.io/projected/d12cfb26-8718-4def-8f36-c7eaa12bc463-kube-api-access-9bm9q\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100149 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-etc-openvswitch\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100188 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovn-node-metrics-cert\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100300 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-kubelet\") pod 
\"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100311 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-node-log\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100378 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100473 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-openvswitch\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100319 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100568 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-config\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100618 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-systemd\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100721 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-netns\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100739 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-etc-openvswitch\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100761 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-ovn-kubernetes\") pod \"ovnkube-node-nvznv\" 
(UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100792 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-ovn-kubernetes\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100810 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-slash\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100837 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-env-overrides\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100903 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-log-socket\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100933 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-bin\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100966 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-ovn\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.101002 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-ovn\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.101017 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-systemd\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.101106 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-script-lib\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 
00:10:30.101199 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-netns\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.100932 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-netd\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.101405 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-log-socket\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.101434 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-kubelet\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.101475 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-slash\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.101512 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-bin\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.101862 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-script-lib\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.101956 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-config\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.102924 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:30 crc kubenswrapper[5107]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 26 00:10:30 crc kubenswrapper[5107]: while [ true ]; Jan 26 00:10:30 crc kubenswrapper[5107]: do Jan 26 00:10:30 crc kubenswrapper[5107]: for f in $(ls /tmp/serviceca); do Jan 26 00:10:30 crc 
kubenswrapper[5107]: echo $f Jan 26 00:10:30 crc kubenswrapper[5107]: ca_file_path="/tmp/serviceca/${f}" Jan 26 00:10:30 crc kubenswrapper[5107]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 26 00:10:30 crc kubenswrapper[5107]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 26 00:10:30 crc kubenswrapper[5107]: if [ -e "${reg_dir_path}" ]; then Jan 26 00:10:30 crc kubenswrapper[5107]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:30 crc kubenswrapper[5107]: else Jan 26 00:10:30 crc kubenswrapper[5107]: mkdir $reg_dir_path Jan 26 00:10:30 crc kubenswrapper[5107]: cp $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: for d in $(ls /etc/docker/certs.d); do Jan 26 00:10:30 crc kubenswrapper[5107]: echo $d Jan 26 00:10:30 crc kubenswrapper[5107]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 26 00:10:30 crc kubenswrapper[5107]: reg_conf_path="/tmp/serviceca/${dp}" Jan 26 00:10:30 crc kubenswrapper[5107]: if [ ! -e "${reg_conf_path}" ]; then Jan 26 00:10:30 crc kubenswrapper[5107]: rm -rf /etc/docker/certs.d/$d Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: sleep 60 & wait ${!} Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ptwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-p96sx_openshift-image-registry(4f6f097f-b642-4bc7-ae13-b78dad78b73e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.104160 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-p96sx" podUID="4f6f097f-b642-4bc7-ae13-b78dad78b73e" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.106745 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovn-node-metrics-cert\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.109531 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.116261 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.116304 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.116317 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.116334 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.116371 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.116923 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bm9q\" (UniqueName: \"kubernetes.io/projected/d12cfb26-8718-4def-8f36-c7eaa12bc463-kube-api-access-9bm9q\") pod \"ovnkube-node-nvznv\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.122530 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.140406 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.151753 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.164068 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.173040 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.184500 5107 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.201805 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.201874 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmtjt\" (UniqueName: \"kubernetes.io/projected/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-kube-api-access-vmtjt\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.219774 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5107]: 
I0126 00:10:30.219842 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.219854 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.219877 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.219909 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.228094 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wbn74" Jan 26 00:10:30 crc kubenswrapper[5107]: W0126 00:10:30.242242 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65e0e338_0636_411c_ac3c_9972beecf25b.slice/crio-e16f162685de7c566c8ef0a625d9e87e2d784d99a1cc2e21b9398bb112c8d304 WatchSource:0}: Error finding container e16f162685de7c566c8ef0a625d9e87e2d784d99a1cc2e21b9398bb112c8d304: Status 404 returned error can't find the container with id e16f162685de7c566c8ef0a625d9e87e2d784d99a1cc2e21b9398bb112c8d304 Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.245982 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:30 crc kubenswrapper[5107]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:30 crc kubenswrapper[5107]: set -uo pipefail Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 26 00:10:30 crc kubenswrapper[5107]: HOSTS_FILE="/etc/hosts" Jan 26 00:10:30 crc kubenswrapper[5107]: TEMP_FILE="/tmp/hosts.tmp" Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: # Make a temporary file with the old hosts file's attributes. Jan 26 00:10:30 crc kubenswrapper[5107]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 26 00:10:30 crc kubenswrapper[5107]: echo "Failed to preserve hosts file. Exiting." Jan 26 00:10:30 crc kubenswrapper[5107]: exit 1 Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: while true; do Jan 26 00:10:30 crc kubenswrapper[5107]: declare -A svc_ips Jan 26 00:10:30 crc kubenswrapper[5107]: for svc in "${services[@]}"; do Jan 26 00:10:30 crc kubenswrapper[5107]: # Fetch service IP from cluster dns if present. We make several tries Jan 26 00:10:30 crc kubenswrapper[5107]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Jan 26 00:10:30 crc kubenswrapper[5107]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 26 00:10:30 crc kubenswrapper[5107]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 26 00:10:30 crc kubenswrapper[5107]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:30 crc kubenswrapper[5107]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:30 crc kubenswrapper[5107]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:30 crc kubenswrapper[5107]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 26 00:10:30 crc kubenswrapper[5107]: for i in ${!cmds[*]} Jan 26 00:10:30 crc kubenswrapper[5107]: do Jan 26 00:10:30 crc kubenswrapper[5107]: ips=($(eval "${cmds[i]}")) Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: svc_ips["${svc}"]="${ips[@]}" Jan 26 00:10:30 crc kubenswrapper[5107]: break Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: # Update /etc/hosts only if we get valid service IPs Jan 26 00:10:30 crc kubenswrapper[5107]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 26 00:10:30 crc kubenswrapper[5107]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 26 00:10:30 crc kubenswrapper[5107]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 26 00:10:30 crc kubenswrapper[5107]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 26 00:10:30 crc kubenswrapper[5107]: sleep 60 & wait Jan 26 00:10:30 crc kubenswrapper[5107]: continue Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: # Append resolver entries for services Jan 26 00:10:30 crc kubenswrapper[5107]: rc=0 Jan 26 00:10:30 crc kubenswrapper[5107]: for svc in "${!svc_ips[@]}"; do Jan 26 00:10:30 crc kubenswrapper[5107]: for ip in ${svc_ips[${svc}]}; do Jan 26 00:10:30 crc kubenswrapper[5107]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ $rc -ne 0 ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: sleep 60 & wait Jan 26 00:10:30 crc kubenswrapper[5107]: continue Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 26 00:10:30 crc kubenswrapper[5107]: # Replace /etc/hosts with our modified version if needed Jan 26 00:10:30 crc kubenswrapper[5107]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 26 00:10:30 crc kubenswrapper[5107]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: sleep 60 & wait Jan 26 00:10:30 crc kubenswrapper[5107]: unset svc_ips Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnj62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-wbn74_openshift-dns(65e0e338-0636-411c-ac3c-9972beecf25b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.247191 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-wbn74" podUID="65e0e338-0636-411c-ac3c-9972beecf25b" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.302438 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmtjt\" (UniqueName: \"kubernetes.io/projected/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-kube-api-access-vmtjt\") pod \"network-metrics-daemon-bdn4m\" (UID: 
\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.302673 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.302845 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.302999 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs podName:93b5402e-3f3e-4e3b-8cf4-f919871d0c86 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:30.802942436 +0000 UTC m=+75.720536782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs") pod "network-metrics-daemon-bdn4m" (UID: "93b5402e-3f3e-4e3b-8cf4-f919871d0c86") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.322130 5107 projected.go:289] Couldn't get configMap openshift-multus/kube-root-ca.crt: object "openshift-multus"/"kube-root-ca.crt" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.322177 5107 projected.go:289] Couldn't get configMap openshift-multus/openshift-service-ca.crt: object "openshift-multus"/"openshift-service-ca.crt" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.322195 5107 projected.go:194] Error preparing data for projected volume kube-api-access-vmtjt for pod openshift-multus/network-metrics-daemon-bdn4m: [object "openshift-multus"/"kube-root-ca.crt" not registered, object "openshift-multus"/"openshift-service-ca.crt" not registered] Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.322293 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-kube-api-access-vmtjt podName:93b5402e-3f3e-4e3b-8cf4-f919871d0c86 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:30.822263745 +0000 UTC m=+75.739858091 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vmtjt" (UniqueName: "kubernetes.io/projected/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-kube-api-access-vmtjt") pod "network-metrics-daemon-bdn4m" (UID: "93b5402e-3f3e-4e3b-8cf4-f919871d0c86") : [object "openshift-multus"/"kube-root-ca.crt" not registered, object "openshift-multus"/"openshift-service-ca.crt" not registered] Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.323143 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.323201 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.323228 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.323252 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.323274 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.334317 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.334496 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.345156 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.348595 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.348594 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.350389 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.355218 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.357414 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.358448 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.358505 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.358689 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.358808 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.358944 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.363265 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.365324 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.365806 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.366631 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.367771 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.368658 5107 scope.go:117] "RemoveContainer" containerID="cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.370427 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.378374 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.380320 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.380547 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.381142 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.384782 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.400209 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.404627 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.416440 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:30 crc kubenswrapper[5107]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 26 00:10:30 crc kubenswrapper[5107]: apiVersion: v1 Jan 26 00:10:30 crc kubenswrapper[5107]: clusters: Jan 26 00:10:30 crc kubenswrapper[5107]: - cluster: Jan 26 00:10:30 crc kubenswrapper[5107]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 26 00:10:30 crc kubenswrapper[5107]: server: https://api-int.crc.testing:6443 Jan 26 00:10:30 crc kubenswrapper[5107]: name: default-cluster Jan 26 00:10:30 crc kubenswrapper[5107]: contexts: Jan 26 00:10:30 crc kubenswrapper[5107]: - context: Jan 26 00:10:30 crc kubenswrapper[5107]: cluster: default-cluster Jan 26 00:10:30 crc kubenswrapper[5107]: namespace: default Jan 26 00:10:30 crc kubenswrapper[5107]: user: default-auth Jan 26 00:10:30 crc kubenswrapper[5107]: 
name: default-context Jan 26 00:10:30 crc kubenswrapper[5107]: current-context: default-context Jan 26 00:10:30 crc kubenswrapper[5107]: kind: Config Jan 26 00:10:30 crc kubenswrapper[5107]: preferences: {} Jan 26 00:10:30 crc kubenswrapper[5107]: users: Jan 26 00:10:30 crc kubenswrapper[5107]: - name: default-auth Jan 26 00:10:30 crc kubenswrapper[5107]: user: Jan 26 00:10:30 crc kubenswrapper[5107]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:30 crc kubenswrapper[5107]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:30 crc kubenswrapper[5107]: EOF Jan 26 00:10:30 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bm9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-nvznv_openshift-ovn-kubernetes(d12cfb26-8718-4def-8f36-c7eaa12bc463): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.418832 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.425696 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.426174 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.426262 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.426332 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.426391 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.430475 5107 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.430769 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.450420 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.464326 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.478355 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.491519 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504610 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504685 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504708 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504733 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504751 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504777 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504800 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 
00:10:30.504822 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504842 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504863 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504942 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504967 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.504985 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505001 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505020 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505038 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505056 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505072 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505092 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505115 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505140 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505160 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505180 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505201 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505234 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505257 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505274 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505295 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505319 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505345 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505368 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505390 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505411 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505429 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505479 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505499 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505523 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505544 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505563 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505584 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505606 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505625 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505643 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505662 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505685 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505707 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505728 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 
00:10:30.505747 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505766 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505784 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505801 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505818 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505834 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505857 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505904 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505938 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505964 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 
26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505992 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506025 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506057 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506086 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506112 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506148 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506173 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506202 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506222 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506243 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod 
\"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506268 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506293 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506322 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506349 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506375 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506405 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506433 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506464 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506488 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506511 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506546 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506574 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506596 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506617 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506641 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506661 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506679 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506698 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506718 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506736 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506755 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506778 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506804 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506828 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506845 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506868 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506901 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506920 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506941 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.507554 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: 
\"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.507635 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.507709 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.507756 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.508154 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.508452 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.508496 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.508741 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.508816 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.508843 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.508874 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.508954 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.508981 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509009 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509049 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509161 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509239 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509267 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509318 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509356 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509420 5107 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509448 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.510324 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.510412 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.510450 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.510643 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.511150 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.511425 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.511460 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.511524 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:10:30 
crc kubenswrapper[5107]: I0126 00:10:30.511700 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.511732 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.511785 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.511820 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512025 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512112 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512158 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512214 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512242 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512295 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: 
\"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512331 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512378 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.505265 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512423 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.506209 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.507178 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.507284 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.507428 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.507903 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509135 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509138 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509162 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509179 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509677 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509816 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509856 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.510068 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.509911 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.510142 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.510236 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.510244 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.510365 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.511266 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.511283 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). 
InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.511725 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512389 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512649 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512825 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.513173 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.513273 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.513772 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.513743 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.514174 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.514401 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.514564 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.514639 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.515145 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.515619 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.515780 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.512407 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.516271 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.516329 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.516364 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.516710 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.516979 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.516399 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517033 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517042 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517265 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517294 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517347 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517414 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517448 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517512 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 
00:10:30.517544 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517623 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517711 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517468 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517823 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.517967 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518079 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518125 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518158 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518195 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518228 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518259 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518302 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518330 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518364 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518392 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod 
\"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518424 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518465 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518497 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518526 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518559 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518589 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518660 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518696 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518732 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518762 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518790 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518822 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518856 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518906 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518935 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518966 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518998 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519027 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519054 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519090 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519120 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519152 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519183 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519211 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519237 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519276 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519308 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519336 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519365 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519412 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519442 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519471 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519497 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519525 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519557 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519593 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519621 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519648 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519675 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: 
I0126 00:10:30.519703 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519732 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519758 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519785 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519813 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519842 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519901 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519930 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519984 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.520015 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.520045 5107 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.520091 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.520122 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.520152 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.520185 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.521619 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518192 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518323 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518502 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.518684 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.519580 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.520176 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.520865 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.520906 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.521418 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.521473 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.520377 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.521539 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.522050 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.522170 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.522193 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.522412 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.522462 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.522622 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.522709 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.522878 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.523379 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.523456 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.523546 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.523784 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.523923 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.524295 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.524387 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.524539 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.525148 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.526313 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.526515 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.526654 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.526762 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.526810 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.527055 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.527229 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.527294 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.527135 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.528029 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.528404 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). 
InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.528552 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.528567 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.528812 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.529065 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.529375 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.529697 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.530238 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.530213 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.530606 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.530971 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.531449 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.531877 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.532341 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.532384 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.532401 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.532524 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.532650 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.532858 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.533154 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.533226 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.533292 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.533366 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.533146 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.533331 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.533514 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.533749 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.533833 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.533860 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.534149 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.534250 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.534414 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.535126 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.535250 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.534974 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.535494 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.535511 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.535749 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.535768 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:31.035745619 +0000 UTC m=+75.953339965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.535866 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.536143 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.536606 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.536611 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.537061 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.537110 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.537400 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.537516 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.537740 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.538184 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.538500 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.538855 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.538908 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.539180 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.539457 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.540026 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.540594 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.541206 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.541239 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.541379 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.541411 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.541661 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.541814 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.542131 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.542265 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.542571 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.542608 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.542710 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.543694 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.543913 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.543964 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.544114 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.544166 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.544196 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.544235 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.544420 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.544468 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75l2g\" (UniqueName: \"kubernetes.io/projected/2e5342d5-2d0c-458d-94b7-25c802ce298a-kube-api-access-75l2g\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.544706 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65e3191d-a6c4-4983-aa24-9f03af38c82b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.544737 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-system-cni-dir\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.544764 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-os-release\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.544833 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7d907601-1852-43f9-8a70-ef4e71351e81-rootfs\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.545034 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.545169 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.545311 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.545627 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.544920 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-var-lib-cni-multus\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.545751 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.545821 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e5342d5-2d0c-458d-94b7-25c802ce298a-cni-binary-copy\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.545929 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-var-lib-kubelet\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546004 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-cnibin\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546011 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546031 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65e3191d-a6c4-4983-aa24-9f03af38c82b-cni-binary-copy\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546074 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mzkl\" (UniqueName: \"kubernetes.io/projected/7d907601-1852-43f9-8a70-ef4e71351e81-kube-api-access-5mzkl\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546097 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-run-netns\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546136 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/65e3191d-a6c4-4983-aa24-9f03af38c82b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546174 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb77l\" (UniqueName: \"kubernetes.io/projected/65e3191d-a6c4-4983-aa24-9f03af38c82b-kube-api-access-wb77l\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546206 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546217 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546314 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546331 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-daemon-config\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546460 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546486 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-run-multus-certs\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546512 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-etc-kubernetes\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546547 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d907601-1852-43f9-8a70-ef4e71351e81-mcd-auth-proxy-config\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546568 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546593 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546795 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546964 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-cni-dir\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546986 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.546996 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547013 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-cnibin\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547068 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm2qk\" (UniqueName: \"kubernetes.io/projected/ec13f4fa-c252-4f6a-9a31-43f70366ae48-kube-api-access-nm2qk\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547145 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547330 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-run-k8s-cni-cncf-io\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547367 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-var-lib-cni-bin\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547406 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-conf-dir\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547427 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d907601-1852-43f9-8a70-ef4e71351e81-proxy-tls\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547454 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-system-cni-dir\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547476 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-os-release\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547593 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547678 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547956 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.548122 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.548616 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.548614 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549056 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.547532 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-socket-dir-parent\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549127 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549148 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-hostroot\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549563 5107 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549597 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549613 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549629 5107 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549627 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549645 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549709 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549736 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549755 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549769 5107 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549785 5107 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549803 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549818 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549835 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549849 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549864 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549898 5107 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549920 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc 
kubenswrapper[5107]: I0126 00:10:30.549935 5107 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549950 5107 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549964 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549982 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549979 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.549998 5107 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550018 5107 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550023 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550033 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550085 5107 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550104 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550119 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550125 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550135 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550146 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550167 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550182 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550197 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550208 5107 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550219 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550230 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550242 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550253 5107 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550263 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550274 5107 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550175 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550285 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550285 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550214 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550386 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550445 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550496 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550353 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550570 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550588 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550631 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550648 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550664 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550679 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550692 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550715 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550729 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550730 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550747 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550765 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550780 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550793 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550806 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550819 5107 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550832 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550845 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550914 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550857 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550951 5107 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550965 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550978 5107 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.550992 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551005 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551018 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551031 5107 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551049 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551061 5107 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551077 5107 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551091 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551103 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551115 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551128 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551140 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551153 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551165 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551181 5107 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551193 5107 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551205 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551219 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551233 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551245 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551238 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551259 5107 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551276 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551293 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551305 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551317 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551331 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551342 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551355 5107 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551367 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551379 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551390 5107 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551401 5107 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551412 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: 
\"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551425 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551436 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551447 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551462 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551473 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551487 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551499 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551512 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551525 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551536 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551549 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551561 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551572 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: 
\"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551585 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551600 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551611 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551623 5107 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551635 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551647 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551661 5107 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551672 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551684 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551698 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551711 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551729 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551741 5107 reconciler_common.go:299] "Volume detached 
for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551754 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551765 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551783 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551794 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551806 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551817 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551828 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551839 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551854 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551867 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551897 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551911 5107 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551925 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551937 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551951 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551963 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551975 5107 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551987 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.551999 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552010 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552021 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552033 5107 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552045 5107 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552080 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552096 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552107 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: 
\"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552119 5107 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552133 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552145 5107 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552159 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552170 5107 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552181 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552192 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552203 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552220 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552233 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552245 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552256 5107 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552269 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: 
\"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552281 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552293 5107 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552308 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552320 5107 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.552333 5107 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.553765 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.553764 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.554319 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.557153 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.557294 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.557430 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.557855 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.557927 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.557922 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.558195 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.558725 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.558733 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.558776 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.558844 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.559102 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.559130 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.559487 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.559556 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.559689 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.560921 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.561043 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.561714 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.563078 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). 
InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.563871 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.564019 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.566495 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.571802 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.574257 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.574498 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.576749 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.577340 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.577739 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.578809 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.578935 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.578983 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.579205 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.579319 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.579856 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.581434 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.582467 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.584173 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.595647 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.596793 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.630752 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.635757 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b06
85b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"lo
g-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.638177 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.638317 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.638360 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.638375 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.638399 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.638416 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.645743 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wbn74" event={"ID":"65e0e338-0636-411c-ac3c-9972beecf25b","Type":"ContainerStarted","Data":"e16f162685de7c566c8ef0a625d9e87e2d784d99a1cc2e21b9398bb112c8d304"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.646911 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p96sx" event={"ID":"4f6f097f-b642-4bc7-ae13-b78dad78b73e","Type":"ContainerStarted","Data":"0922df9d09ca0ed17e1453a634a259dac42adc2fbfbf8c8c68009bdf28a4c651"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.650617 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.651367 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"8a3400726b4c1200e31ba4f938a481b2376047d69194d734e38b53c115970228"} Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.651557 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:30 crc kubenswrapper[5107]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:30 crc kubenswrapper[5107]: set -uo pipefail Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 26 
00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 26 00:10:30 crc kubenswrapper[5107]: HOSTS_FILE="/etc/hosts" Jan 26 00:10:30 crc kubenswrapper[5107]: TEMP_FILE="/tmp/hosts.tmp" Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: # Make a temporary file with the old hosts file's attributes. Jan 26 00:10:30 crc kubenswrapper[5107]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 26 00:10:30 crc kubenswrapper[5107]: echo "Failed to preserve hosts file. Exiting." Jan 26 00:10:30 crc kubenswrapper[5107]: exit 1 Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: while true; do Jan 26 00:10:30 crc kubenswrapper[5107]: declare -A svc_ips Jan 26 00:10:30 crc kubenswrapper[5107]: for svc in "${services[@]}"; do Jan 26 00:10:30 crc kubenswrapper[5107]: # Fetch service IP from cluster dns if present. We make several tries Jan 26 00:10:30 crc kubenswrapper[5107]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 26 00:10:30 crc kubenswrapper[5107]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 26 00:10:30 crc kubenswrapper[5107]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 26 00:10:30 crc kubenswrapper[5107]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:30 crc kubenswrapper[5107]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:30 crc kubenswrapper[5107]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:30 crc kubenswrapper[5107]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 26 00:10:30 crc kubenswrapper[5107]: for i in ${!cmds[*]} Jan 26 00:10:30 crc kubenswrapper[5107]: do Jan 26 00:10:30 crc kubenswrapper[5107]: ips=($(eval "${cmds[i]}")) Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: svc_ips["${svc}"]="${ips[@]}" Jan 26 00:10:30 crc kubenswrapper[5107]: break Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: # Update /etc/hosts only if we get valid service IPs Jan 26 00:10:30 crc kubenswrapper[5107]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 26 00:10:30 crc kubenswrapper[5107]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 26 00:10:30 crc kubenswrapper[5107]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 26 00:10:30 crc kubenswrapper[5107]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 26 00:10:30 crc kubenswrapper[5107]: sleep 60 & wait Jan 26 00:10:30 crc kubenswrapper[5107]: continue Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: # Append resolver entries for services Jan 26 00:10:30 crc kubenswrapper[5107]: rc=0 Jan 26 00:10:30 crc kubenswrapper[5107]: for svc in "${!svc_ips[@]}"; do Jan 26 00:10:30 crc kubenswrapper[5107]: for ip in ${svc_ips[${svc}]}; do Jan 26 00:10:30 crc kubenswrapper[5107]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ $rc -ne 0 ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: sleep 60 & wait Jan 26 00:10:30 crc kubenswrapper[5107]: continue Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 26 00:10:30 crc kubenswrapper[5107]: # Replace /etc/hosts with our modified version if needed Jan 26 00:10:30 crc kubenswrapper[5107]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 26 00:10:30 crc kubenswrapper[5107]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: sleep 60 & wait Jan 26 00:10:30 crc kubenswrapper[5107]: unset svc_ips Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnj62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-wbn74_openshift-dns(65e0e338-0636-411c-ac3c-9972beecf25b): CreateContainerConfigError: services have not yet been read at least 
once, cannot construct envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.651793 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:30 crc kubenswrapper[5107]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 26 00:10:30 crc kubenswrapper[5107]: while [ true ]; Jan 26 00:10:30 crc kubenswrapper[5107]: do Jan 26 00:10:30 crc kubenswrapper[5107]: for f in $(ls /tmp/serviceca); do Jan 26 00:10:30 crc kubenswrapper[5107]: echo $f Jan 26 00:10:30 crc kubenswrapper[5107]: ca_file_path="/tmp/serviceca/${f}" Jan 26 00:10:30 crc kubenswrapper[5107]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 26 00:10:30 crc kubenswrapper[5107]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 26 00:10:30 crc kubenswrapper[5107]: if [ -e "${reg_dir_path}" ]; then Jan 26 00:10:30 crc kubenswrapper[5107]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:30 crc kubenswrapper[5107]: else Jan 26 00:10:30 crc kubenswrapper[5107]: mkdir $reg_dir_path Jan 26 00:10:30 crc kubenswrapper[5107]: cp $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: for d in $(ls /etc/docker/certs.d); do Jan 26 00:10:30 crc kubenswrapper[5107]: echo $d Jan 26 00:10:30 crc kubenswrapper[5107]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 26 00:10:30 crc kubenswrapper[5107]: reg_conf_path="/tmp/serviceca/${dp}" Jan 26 00:10:30 crc kubenswrapper[5107]: if [ ! -e "${reg_conf_path}" ]; then Jan 26 00:10:30 crc kubenswrapper[5107]: rm -rf /etc/docker/certs.d/$d Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: sleep 60 & wait ${!} Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ptwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-p96sx_openshift-image-registry(4f6f097f-b642-4bc7-ae13-b78dad78b73e): CreateContainerConfigError: services have not yet 
been read at least once, cannot construct envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.652785 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65e3191d-a6c4-4983-aa24-9f03af38c82b-cni-binary-copy\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653442 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5mzkl\" (UniqueName: \"kubernetes.io/projected/7d907601-1852-43f9-8a70-ef4e71351e81-kube-api-access-5mzkl\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653480 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-run-netns\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653531 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/65e3191d-a6c4-4983-aa24-9f03af38c82b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653561 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wb77l\" (UniqueName: \"kubernetes.io/projected/65e3191d-a6c4-4983-aa24-9f03af38c82b-kube-api-access-wb77l\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653620 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653638 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-run-netns\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653654 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-daemon-config\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653677 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-run-multus-certs\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653701 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-etc-kubernetes\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653730 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d907601-1852-43f9-8a70-ef4e71351e81-mcd-auth-proxy-config\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653751 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653782 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653806 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65e3191d-a6c4-4983-aa24-9f03af38c82b-cni-binary-copy\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653812 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-cni-dir\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653870 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-cnibin\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653934 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nm2qk\" (UniqueName: \"kubernetes.io/projected/ec13f4fa-c252-4f6a-9a31-43f70366ae48-kube-api-access-nm2qk\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653954 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-cni-dir\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653965 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-run-k8s-cni-cncf-io\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.653988 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-var-lib-cni-bin\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.654007 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-conf-dir\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.654592 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d907601-1852-43f9-8a70-ef4e71351e81-proxy-tls\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.654620 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-system-cni-dir\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.654640 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-os-release\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.654682 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-socket-dir-parent\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.654700 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-hostroot\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.654778 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-run-multus-certs\") pod \"multus-f2mpq\" (UID: 
\"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.654836 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/65e3191d-a6c4-4983-aa24-9f03af38c82b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.654938 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-p96sx" podUID="4f6f097f-b642-4bc7-ae13-b78dad78b73e" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.654991 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-etc-kubernetes\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.655243 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:30 crc kubenswrapper[5107]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:30 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: source /etc/kubernetes/apiserver-url.env Jan 26 00:10:30 crc kubenswrapper[5107]: else Jan 26 00:10:30 crc kubenswrapper[5107]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 26 00:10:30 crc kubenswrapper[5107]: exit 1 Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 26 00:10:30 crc kubenswrapper[5107]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.655774 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d907601-1852-43f9-8a70-ef4e71351e81-mcd-auth-proxy-config\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.654792 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-socket-dir-parent\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.655798 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.655850 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-wbn74" podUID="65e0e338-0636-411c-ac3c-9972beecf25b" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.656509 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.656644 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.656651 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-daemon-config\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.656730 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.657074 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-cnibin\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.657082 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-var-lib-cni-bin\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.657110 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-multus-conf-dir\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.657101 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-hostroot\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.657253 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"2aa6430cf68dbef7e70de558d51909bdde3150a7e4e38c9f582f6b1d0e2d3c61"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.657800 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-system-cni-dir\") pod \"multus-additional-cni-plugins-4vppd\" (UID: 
\"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.657946 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-run-k8s-cni-cncf-io\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658004 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658046 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-75l2g\" (UniqueName: \"kubernetes.io/projected/2e5342d5-2d0c-458d-94b7-25c802ce298a-kube-api-access-75l2g\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658082 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65e3191d-a6c4-4983-aa24-9f03af38c82b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658115 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-system-cni-dir\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658142 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-os-release\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658181 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-os-release\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658203 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7d907601-1852-43f9-8a70-ef4e71351e81-rootfs\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658242 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-var-lib-cni-multus\") pod \"multus-f2mpq\" (UID: 
\"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658281 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e5342d5-2d0c-458d-94b7-25c802ce298a-cni-binary-copy\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658376 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-var-lib-kubelet\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658509 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-cnibin\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658634 5107 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658654 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658665 5107 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658677 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658688 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658698 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658708 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658719 5107 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658731 5107 reconciler_common.go:299] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658744 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658755 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658768 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658780 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658790 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658800 5107 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658810 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658819 5107 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658830 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658840 5107 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658850 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658861 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658872 5107 reconciler_common.go:299] 
"Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658909 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658919 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658929 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658940 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658950 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658963 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658974 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658985 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658997 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659008 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659018 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659028 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659041 5107 reconciler_common.go:299] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659050 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659062 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659071 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659081 5107 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659090 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659101 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659119 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659136 5107 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659149 5107 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659162 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659174 5107 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659189 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659199 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659209 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659221 5107 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659230 5107 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659238 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659251 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659263 5107 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659274 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659283 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659292 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659301 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.658751 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65e3191d-a6c4-4983-aa24-9f03af38c82b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659324 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: 
\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659367 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7d907601-1852-43f9-8a70-ef4e71351e81-rootfs\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659394 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-system-cni-dir\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659397 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-var-lib-cni-multus\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.659749 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-os-release\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.660101 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2e5342d5-2d0c-458d-94b7-25c802ce298a-host-var-lib-kubelet\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.660158 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65e3191d-a6c4-4983-aa24-9f03af38c82b-cnibin\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.660198 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e5342d5-2d0c-458d-94b7-25c802ce298a-cni-binary-copy\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.660199 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.661368 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.662513 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"f2b840ec44d71564ca02c56573d38daa2d674c45476fa59c2320ce0fd023dea1"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.663701 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerStarted","Data":"5a71931c9f6b4da462548b6468f1ae63256b59a3616870102e815d45c9040a1c"} Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.665134 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:30 crc kubenswrapper[5107]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 26 00:10:30 crc kubenswrapper[5107]: apiVersion: v1 Jan 26 00:10:30 crc kubenswrapper[5107]: clusters: Jan 26 00:10:30 crc kubenswrapper[5107]: - cluster: Jan 26 00:10:30 crc kubenswrapper[5107]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 26 00:10:30 crc kubenswrapper[5107]: server: https://api-int.crc.testing:6443 Jan 26 00:10:30 crc kubenswrapper[5107]: name: default-cluster Jan 26 00:10:30 crc kubenswrapper[5107]: contexts: Jan 26 00:10:30 crc kubenswrapper[5107]: - context: Jan 26 00:10:30 crc kubenswrapper[5107]: cluster: default-cluster Jan 26 00:10:30 crc kubenswrapper[5107]: namespace: default Jan 26 00:10:30 crc kubenswrapper[5107]: user: default-auth Jan 26 00:10:30 crc kubenswrapper[5107]: 
name: default-context Jan 26 00:10:30 crc kubenswrapper[5107]: current-context: default-context Jan 26 00:10:30 crc kubenswrapper[5107]: kind: Config Jan 26 00:10:30 crc kubenswrapper[5107]: preferences: {} Jan 26 00:10:30 crc kubenswrapper[5107]: users: Jan 26 00:10:30 crc kubenswrapper[5107]: - name: default-auth Jan 26 00:10:30 crc kubenswrapper[5107]: user: Jan 26 00:10:30 crc kubenswrapper[5107]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:30 crc kubenswrapper[5107]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:30 crc kubenswrapper[5107]: EOF Jan 26 00:10:30 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bm9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-nvznv_openshift-ovn-kubernetes(d12cfb26-8718-4def-8f36-c7eaa12bc463): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.665541 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:30 crc kubenswrapper[5107]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:30 crc kubenswrapper[5107]: source "/env/_master" Jan 26 00:10:30 crc kubenswrapper[5107]: set +o allexport Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 26 00:10:30 crc kubenswrapper[5107]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 26 00:10:30 crc kubenswrapper[5107]: ho_enable="--enable-hybrid-overlay" Jan 26 00:10:30 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 26 00:10:30 crc kubenswrapper[5107]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 26 00:10:30 crc kubenswrapper[5107]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 26 00:10:30 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:30 crc kubenswrapper[5107]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 26 00:10:30 crc kubenswrapper[5107]: --webhook-host=127.0.0.1 \ Jan 26 00:10:30 crc kubenswrapper[5107]: --webhook-port=9743 \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${ho_enable} \ Jan 26 00:10:30 crc kubenswrapper[5107]: --enable-interconnect \ Jan 26 00:10:30 crc kubenswrapper[5107]: --disable-approver \ Jan 26 00:10:30 crc kubenswrapper[5107]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 26 00:10:30 crc kubenswrapper[5107]: --wait-for-kubernetes-api=200s \ Jan 26 00:10:30 crc kubenswrapper[5107]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 26 00:10:30 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Jan 26 00:10:30 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.665800 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af20
80d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.666564 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.673079 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d907601-1852-43f9-8a70-ef4e71351e81-proxy-tls\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.673334 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 
00:10:30 crc kubenswrapper[5107]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:30 crc kubenswrapper[5107]: source "/env/_master" Jan 26 00:10:30 crc kubenswrapper[5107]: set +o allexport Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 26 00:10:30 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:30 crc kubenswrapper[5107]: --disable-webhook \ Jan 26 00:10:30 crc kubenswrapper[5107]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 26 00:10:30 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Jan 26 00:10:30 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.674563 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.674640 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed 
to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.679423 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mzkl\" (UniqueName: \"kubernetes.io/projected/7d907601-1852-43f9-8a70-ef4e71351e81-kube-api-access-5mzkl\") pod \"machine-config-daemon-94c4c\" (UID: \"7d907601-1852-43f9-8a70-ef4e71351e81\") " pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.681438 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm2qk\" (UniqueName: \"kubernetes.io/projected/ec13f4fa-c252-4f6a-9a31-43f70366ae48-kube-api-access-nm2qk\") pod \"ovnkube-control-plane-57b78d8988-kcwjn\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.681450 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb77l\" (UniqueName: \"kubernetes.io/projected/65e3191d-a6c4-4983-aa24-9f03af38c82b-kube-api-access-wb77l\") pod \"multus-additional-cni-plugins-4vppd\" (UID: \"65e3191d-a6c4-4983-aa24-9f03af38c82b\") " pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.683577 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4vppd" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.687345 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-75l2g\" (UniqueName: \"kubernetes.io/projected/2e5342d5-2d0c-458d-94b7-25c802ce298a-kube-api-access-75l2g\") pod \"multus-f2mpq\" (UID: \"2e5342d5-2d0c-458d-94b7-25c802ce298a\") " pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.691946 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-f2mpq" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.706254 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"moun
tPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.722929 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wb77l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-4vppd_openshift-multus(65e3191d-a6c4-4983-aa24-9f03af38c82b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.723091 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.723734 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:30 crc kubenswrapper[5107]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 26 00:10:30 crc kubenswrapper[5107]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 26 00:10:30 crc kubenswrapper[5107]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagatio
n:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75l2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-f2mpq_openshift-multus(2e5342d5-2d0c-458d-94b7-25c802ce298a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.724914 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-4vppd" podUID="65e3191d-a6c4-4983-aa24-9f03af38c82b" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.725312 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-f2mpq" podUID="2e5342d5-2d0c-458d-94b7-25c802ce298a" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.727875 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.741896 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.741947 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.741960 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.741978 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.741993 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.742556 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.746687 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mzkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-94c4c_openshift-machine-config-operator(7d907601-1852-43f9-8a70-ef4e71351e81): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.750487 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mzkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-94c4c_openshift-machine-config-operator(7d907601-1852-43f9-8a70-ef4e71351e81): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.751859 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.754524 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.767483 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.784095 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.804360 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.820177 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.837243 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.845841 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.845936 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.845950 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.845973 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.845984 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.859966 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.868057 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.868118 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.868143 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.868164 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmtjt\" (UniqueName: \"kubernetes.io/projected/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-kube-api-access-vmtjt\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.868670 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.868738 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.868776 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs podName:93b5402e-3f3e-4e3b-8cf4-f919871d0c86 nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:31.868751975 +0000 UTC m=+76.786346321 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs") pod "network-metrics-daemon-bdn4m" (UID: "93b5402e-3f3e-4e3b-8cf4-f919871d0c86") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.868682 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.868875 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:32.868844678 +0000 UTC m=+77.786439214 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.868926 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:32.86891201 +0000 UTC m=+77.786506506 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.874318 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmtjt\" (UniqueName: \"kubernetes.io/projected/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-kube-api-access-vmtjt\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.880851 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.899309 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.915313 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.936603 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.949079 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.949135 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.949148 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.949166 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.949178 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:30Z","lastTransitionTime":"2026-01-26T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.962142 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.968913 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.968957 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.969087 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.969112 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.969125 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.969181 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:32.969161645 +0000 UTC m=+77.886755991 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.969477 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.969499 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.969509 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.969544 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:32.969535336 +0000 UTC m=+77.887129682 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:30 crc kubenswrapper[5107]: W0126 00:10:30.978731 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec13f4fa_c252_4f6a_9a31_43f70366ae48.slice/crio-5fd9d33e7f51a3f529e4963e034176ddf70e34e2fdfa54f0ba68a3a217cae605 WatchSource:0}: Error finding container 5fd9d33e7f51a3f529e4963e034176ddf70e34e2fdfa54f0ba68a3a217cae605: Status 404 returned error can't find the container with id 5fd9d33e7f51a3f529e4963e034176ddf70e34e2fdfa54f0ba68a3a217cae605 Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.981197 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:30 crc kubenswrapper[5107]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:30 crc kubenswrapper[5107]: set -euo pipefail Jan 26 00:10:30 crc kubenswrapper[5107]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 26 00:10:30 crc kubenswrapper[5107]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 26 00:10:30 crc kubenswrapper[5107]: # As the secret mount is optional we must wait for the files to be present. Jan 26 00:10:30 crc kubenswrapper[5107]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Jan 26 00:10:30 crc kubenswrapper[5107]: TS=$(date +%s) Jan 26 00:10:30 crc kubenswrapper[5107]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 26 00:10:30 crc kubenswrapper[5107]: HAS_LOGGED_INFO=0 Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: log_missing_certs(){ Jan 26 00:10:30 crc kubenswrapper[5107]: CUR_TS=$(date +%s) Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 26 00:10:30 crc kubenswrapper[5107]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 26 00:10:30 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 26 00:10:30 crc kubenswrapper[5107]: HAS_LOGGED_INFO=1 Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: } Jan 26 00:10:30 crc kubenswrapper[5107]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 26 00:10:30 crc kubenswrapper[5107]: log_missing_certs Jan 26 00:10:30 crc kubenswrapper[5107]: sleep 5 Jan 26 00:10:30 crc kubenswrapper[5107]: done Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 26 00:10:30 crc kubenswrapper[5107]: exec /usr/bin/kube-rbac-proxy \ Jan 26 00:10:30 crc kubenswrapper[5107]: --logtostderr \ Jan 26 00:10:30 crc kubenswrapper[5107]: --secure-listen-address=:9108 \ Jan 26 00:10:30 crc kubenswrapper[5107]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 26 00:10:30 crc kubenswrapper[5107]: --upstream=http://127.0.0.1:29108/ \ Jan 26 00:10:30 crc kubenswrapper[5107]: --tls-private-key-file=${TLS_PK} \ Jan 26 00:10:30 crc kubenswrapper[5107]: --tls-cert-file=${TLS_CERT} Jan 26 00:10:30 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm2qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-kcwjn_openshift-ovn-kubernetes(ec13f4fa-c252-4f6a-9a31-43f70366ae48): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: I0126 00:10:30.981455 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.984295 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:30 crc kubenswrapper[5107]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:30 crc kubenswrapper[5107]: source "/env/_master" Jan 26 00:10:30 crc kubenswrapper[5107]: set +o allexport Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt= Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt= Jan 26 
00:10:30 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt= Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt= Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag= Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: # This is needed so that converting clusters from GA to TP Jan 26 00:10:30 crc kubenswrapper[5107]: # will rollout control plane pods as well Jan 26 00:10:30 crc kubenswrapper[5107]: network_segmentation_enabled_flag= Jan 26 00:10:30 crc kubenswrapper[5107]: multi_network_enabled_flag= Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "true" != "true" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: route_advertisements_enable_flag= Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag= Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: # Enable multi-network policy if configured (control-plane always full mode) Jan 26 00:10:30 crc kubenswrapper[5107]: multi_network_policy_enabled_flag= Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc 
kubenswrapper[5107]: # Enable admin network policy if configured (control-plane always full mode) Jan 26 00:10:30 crc kubenswrapper[5107]: admin_network_policy_enabled_flag= Jan 26 00:10:30 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Jan 26 00:10:30 crc kubenswrapper[5107]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: if [ "shared" == "shared" ]; then Jan 26 00:10:30 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode shared" Jan 26 00:10:30 crc kubenswrapper[5107]: elif [ "shared" == "local" ]; then Jan 26 00:10:30 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode local" Jan 26 00:10:30 crc kubenswrapper[5107]: else Jan 26 00:10:30 crc kubenswrapper[5107]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 26 00:10:30 crc kubenswrapper[5107]: exit 1 Jan 26 00:10:30 crc kubenswrapper[5107]: fi Jan 26 00:10:30 crc kubenswrapper[5107]: Jan 26 00:10:30 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 26 00:10:30 crc kubenswrapper[5107]: exec /usr/bin/ovnkube \ Jan 26 00:10:30 crc kubenswrapper[5107]: --enable-interconnect \ Jan 26 00:10:30 crc kubenswrapper[5107]: --init-cluster-manager "${K8S_NODE}" \ Jan 26 00:10:30 crc kubenswrapper[5107]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 26 00:10:30 crc kubenswrapper[5107]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 26 00:10:30 crc kubenswrapper[5107]: --metrics-bind-address "127.0.0.1:29108" \ Jan 26 00:10:30 crc kubenswrapper[5107]: --metrics-enable-pprof \ Jan 26 00:10:30 crc kubenswrapper[5107]: --metrics-enable-config-duration \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${ovn_v4_join_subnet_opt} \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${ovn_v6_join_subnet_opt} \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${dns_name_resolver_enabled_flag} \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${persistent_ips_enabled_flag} \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${multi_network_enabled_flag} \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${network_segmentation_enabled_flag} \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${gateway_mode_flags} \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${route_advertisements_enable_flag} \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${preconfigured_udn_addresses_enable_flag} \ Jan 26 00:10:30 crc kubenswrapper[5107]: --enable-egress-ip=true \ Jan 26 00:10:30 crc kubenswrapper[5107]: --enable-egress-firewall=true \ Jan 26 00:10:30 crc kubenswrapper[5107]: --enable-egress-qos=true \ Jan 26 00:10:30 crc kubenswrapper[5107]: --enable-egress-service=true \ Jan 26 00:10:30 crc kubenswrapper[5107]: --enable-multicast \ Jan 26 00:10:30 crc kubenswrapper[5107]: --enable-multi-external-gateway=true \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${multi_network_policy_enabled_flag} \ Jan 26 00:10:30 crc kubenswrapper[5107]: ${admin_network_policy_enabled_flag} Jan 26 00:10:30 crc kubenswrapper[5107]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm2qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-kcwjn_openshift-ovn-kubernetes(ec13f4fa-c252-4f6a-9a31-43f70366ae48): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:30 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:30 crc kubenswrapper[5107]: E0126 00:10:30.985737 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.035396 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.051654 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.051733 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.051770 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.051790 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.051805 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.063377 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.069549 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.070118 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:32.07009021 +0000 UTC m=+76.987684556 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.097415 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.112649 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.113004 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.113335 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.113540 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.139185 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.155420 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.155493 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.155513 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.155542 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.155560 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.179919 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.220010 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.257756 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.258921 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.259008 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.259026 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.259067 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.259081 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.295924 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.338446 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.362468 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.362537 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.362548 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.362569 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.362581 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.374735 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"
name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.422362 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.459401 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.465584 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.465638 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.465649 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.465670 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.465682 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.512241 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.539971 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.569440 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.569512 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.569526 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.569552 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.569571 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.578085 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.628075 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0
dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.661306 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.668623 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerStarted","Data":"364bd0870d14f95e4b69793579ff35af42118e8730c6347fca24d6a52dd6b0e4"} Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.670083 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f2mpq" event={"ID":"2e5342d5-2d0c-458d-94b7-25c802ce298a","Type":"ContainerStarted","Data":"d8acb9e1be49cbf0294957940362407d6d6d81f4f92f42598c33961b874d2ba4"} Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.671732 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mzkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-94c4c_openshift-machine-config-operator(7d907601-1852-43f9-8a70-ef4e71351e81): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.672058 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.672090 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.672101 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.672114 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.672125 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.674531 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:31 crc kubenswrapper[5107]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 26 00:10:31 crc kubenswrapper[5107]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 26 00:10:31 crc kubenswrapper[5107]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{N
ame:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75l2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-f2mpq_openshift-multus(2e5342d5-2d0c-458d-94b7-25c802ce298a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:31 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.674722 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.675713 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-f2mpq" podUID="2e5342d5-2d0c-458d-94b7-25c802ce298a" Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.675978 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mzkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-94c4c_openshift-machine-config-operator(7d907601-1852-43f9-8a70-ef4e71351e81): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.679009 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.679041 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d"} Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.679931 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.681161 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" event={"ID":"ec13f4fa-c252-4f6a-9a31-43f70366ae48","Type":"ContainerStarted","Data":"5fd9d33e7f51a3f529e4963e034176ddf70e34e2fdfa54f0ba68a3a217cae605"} Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.683533 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:31 crc kubenswrapper[5107]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:31 crc kubenswrapper[5107]: set -euo pipefail Jan 26 00:10:31 crc kubenswrapper[5107]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 26 00:10:31 crc kubenswrapper[5107]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 26 00:10:31 crc kubenswrapper[5107]: # As the secret mount is optional we must wait for the files to be present. Jan 26 00:10:31 crc kubenswrapper[5107]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Jan 26 00:10:31 crc kubenswrapper[5107]: TS=$(date +%s) Jan 26 00:10:31 crc kubenswrapper[5107]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 26 00:10:31 crc kubenswrapper[5107]: HAS_LOGGED_INFO=0 Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: log_missing_certs(){ Jan 26 00:10:31 crc kubenswrapper[5107]: CUR_TS=$(date +%s) Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 26 00:10:31 crc kubenswrapper[5107]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 26 00:10:31 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 26 00:10:31 crc kubenswrapper[5107]: HAS_LOGGED_INFO=1 Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: } Jan 26 00:10:31 crc kubenswrapper[5107]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 26 00:10:31 crc kubenswrapper[5107]: log_missing_certs Jan 26 00:10:31 crc kubenswrapper[5107]: sleep 5 Jan 26 00:10:31 crc kubenswrapper[5107]: done Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 26 00:10:31 crc kubenswrapper[5107]: exec /usr/bin/kube-rbac-proxy \ Jan 26 00:10:31 crc kubenswrapper[5107]: --logtostderr \ Jan 26 00:10:31 crc kubenswrapper[5107]: --secure-listen-address=:9108 \ Jan 26 00:10:31 crc kubenswrapper[5107]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 26 00:10:31 crc kubenswrapper[5107]: --upstream=http://127.0.0.1:29108/ \ Jan 26 00:10:31 crc kubenswrapper[5107]: --tls-private-key-file=${TLS_PK} \ Jan 26 00:10:31 crc kubenswrapper[5107]: --tls-cert-file=${TLS_CERT} Jan 26 00:10:31 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm2qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-kcwjn_openshift-ovn-kubernetes(ec13f4fa-c252-4f6a-9a31-43f70366ae48): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:31 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.683905 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" 
event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerStarted","Data":"67e8ace11acd5e80a800e5f1230bd37507e2648e2e256f2e351ae259ebd8aac2"} Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.685742 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:31 crc kubenswrapper[5107]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:31 crc kubenswrapper[5107]: source "/env/_master" Jan 26 00:10:31 crc kubenswrapper[5107]: set +o allexport Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt= Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt= Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt= Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt= Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag= Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: # This is needed so that converting clusters from GA to TP Jan 26 00:10:31 crc kubenswrapper[5107]: # will rollout control plane pods as well Jan 26 00:10:31 crc kubenswrapper[5107]: network_segmentation_enabled_flag= Jan 26 00:10:31 crc kubenswrapper[5107]: multi_network_enabled_flag= Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "true" != "true" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc 
kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: route_advertisements_enable_flag= Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag= Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: # Enable multi-network policy if configured (control-plane always full mode) Jan 26 00:10:31 crc kubenswrapper[5107]: multi_network_policy_enabled_flag= Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: # Enable admin network policy if configured (control-plane always full mode) Jan 26 00:10:31 crc kubenswrapper[5107]: admin_network_policy_enabled_flag= Jan 26 00:10:31 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Jan 26 00:10:31 crc kubenswrapper[5107]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: if [ "shared" == "shared" ]; then Jan 26 00:10:31 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode shared" Jan 26 00:10:31 crc kubenswrapper[5107]: elif [ "shared" == "local" ]; then Jan 26 00:10:31 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode local" Jan 26 00:10:31 crc kubenswrapper[5107]: else Jan 26 00:10:31 crc kubenswrapper[5107]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 26 00:10:31 crc kubenswrapper[5107]: exit 1 Jan 26 00:10:31 crc kubenswrapper[5107]: fi Jan 26 00:10:31 crc kubenswrapper[5107]: Jan 26 00:10:31 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 26 00:10:31 crc kubenswrapper[5107]: exec /usr/bin/ovnkube \ Jan 26 00:10:31 crc kubenswrapper[5107]: --enable-interconnect \ Jan 26 00:10:31 crc kubenswrapper[5107]: --init-cluster-manager "${K8S_NODE}" \ Jan 26 00:10:31 crc kubenswrapper[5107]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 26 00:10:31 crc kubenswrapper[5107]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 26 00:10:31 crc kubenswrapper[5107]: --metrics-bind-address "127.0.0.1:29108" \ Jan 26 00:10:31 crc kubenswrapper[5107]: --metrics-enable-pprof \ Jan 26 00:10:31 crc kubenswrapper[5107]: --metrics-enable-config-duration \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${ovn_v4_join_subnet_opt} \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${ovn_v6_join_subnet_opt} \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${dns_name_resolver_enabled_flag} \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${persistent_ips_enabled_flag} \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${multi_network_enabled_flag} \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${network_segmentation_enabled_flag} \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${gateway_mode_flags} \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${route_advertisements_enable_flag} \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${preconfigured_udn_addresses_enable_flag} \ Jan 26 00:10:31 crc kubenswrapper[5107]: --enable-egress-ip=true \ Jan 26 00:10:31 crc kubenswrapper[5107]: --enable-egress-firewall=true \ Jan 26 00:10:31 crc kubenswrapper[5107]: --enable-egress-qos=true \ Jan 26 00:10:31 crc kubenswrapper[5107]: --enable-egress-service=true \ Jan 26 00:10:31 crc kubenswrapper[5107]: --enable-multicast \ Jan 26 00:10:31 crc kubenswrapper[5107]: --enable-multi-external-gateway=true \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${multi_network_policy_enabled_flag} \ Jan 26 00:10:31 crc kubenswrapper[5107]: ${admin_network_policy_enabled_flag} Jan 26 00:10:31 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm2qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-kcwjn_openshift-ovn-kubernetes(ec13f4fa-c252-4f6a-9a31-43f70366ae48): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:31 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.686045 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wb77l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-4vppd_openshift-multus(65e3191d-a6c4-4983-aa24-9f03af38c82b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.686958 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: 
\"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.687252 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-4vppd" podUID="65e3191d-a6c4-4983-aa24-9f03af38c82b" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.700569 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475a
aa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized 
mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.774198 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.774257 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.774276 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.774296 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.774313 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.778163 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.795839 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.822179 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.857331 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.876338 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.876416 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.876428 5107 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.876449 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.876461 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.879868 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.880135 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:31 crc kubenswrapper[5107]: E0126 00:10:31.880251 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs podName:93b5402e-3f3e-4e3b-8cf4-f919871d0c86 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:33.880220983 +0000 UTC m=+78.797815489 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs") pod "network-metrics-daemon-bdn4m" (UID: "93b5402e-3f3e-4e3b-8cf4-f919871d0c86") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.897225 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.936595 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.976465 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.979395 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.979433 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.979444 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.979496 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:31 crc kubenswrapper[5107]: I0126 00:10:31.979510 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:31Z","lastTransitionTime":"2026-01-26T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.014655 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"
name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.058574 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.081901 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.082057 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.082100 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.082114 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.082131 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.082427 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5107]: E0126 00:10:32.082595 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:34.082566357 +0000 UTC m=+79.000160703 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.098053 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.199843 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:32 crc kubenswrapper[5107]: E0126 00:10:32.200046 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.200661 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:32 crc kubenswrapper[5107]: E0126 00:10:32.200770 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.201213 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.201301 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.201317 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.201342 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.201360 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.204557 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.205804 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.207793 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.210369 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.212945 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.214554 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.217137 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.219017 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.219828 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.221611 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.223234 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.223182 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.227182 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.228125 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.230263 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.230904 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.231971 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.235865 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.237492 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.239396 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.240712 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.243781 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.245637 5107 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.246073 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.247650 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.249067 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.250692 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.252100 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.261822 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"li
nux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.271653 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.272822 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.275994 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.277283 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.278935 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.280820 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.283295 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.284731 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.286430 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.287255 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.288349 5107 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.288566 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" 
path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.292631 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.294032 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.296066 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.297946 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.298664 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.300035 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.301649 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.302413 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.304044 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.304898 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.304944 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.304956 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.304975 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.304987 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.305633 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.307292 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.308392 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.310112 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.311436 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.313566 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.315471 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.317969 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.318998 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.320812 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.322236 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.407148 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.407553 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.407677 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.407765 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.407841 5107 setters.go:618] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.458319 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\
\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},
{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044
def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.653733 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.655196 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.655272 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.655289 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.655315 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.655327 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.673313 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.686524 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.698798 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.714673 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.731101 5107 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.756171 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.757091 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.757117 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.757126 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.757140 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.757149 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.771284 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.783069 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.795993 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.808217 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.818751 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.828710 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.841984 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.857554 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.858964 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.859011 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.859028 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.859049 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.859066 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.911771 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.938770 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.957601 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.957656 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:32 crc kubenswrapper[5107]: E0126 00:10:32.957838 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:32 crc kubenswrapper[5107]: E0126 00:10:32.957982 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert 
podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:36.95795604 +0000 UTC m=+81.875550386 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:32 crc kubenswrapper[5107]: E0126 00:10:32.958368 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:32 crc kubenswrapper[5107]: E0126 00:10:32.958403 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:36.958395152 +0000 UTC m=+81.875989498 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.962305 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.962342 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.962353 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.962371 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.962382 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:32Z","lastTransitionTime":"2026-01-26T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:32 crc kubenswrapper[5107]: I0126 00:10:32.986977 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c
8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.026637 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0
dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.056816 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.058397 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.058459 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.058682 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.058714 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.058729 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.058841 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:37.058795712 +0000 UTC m=+81.976390058 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.059096 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.059223 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.059321 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.059561 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:37.059528023 +0000 UTC m=+81.977122369 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.064725 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.064939 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.065007 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.065110 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.065171 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.093481 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.112668 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.112809 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.113230 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.113304 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.132740 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.167747 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.167796 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.167808 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.167825 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.167836 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.173295 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.270085 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.270128 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.270140 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.270156 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.270167 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.372549 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.372600 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.372610 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.372628 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.372641 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.476315 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.476363 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.476372 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.476388 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.476399 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.579247 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.579302 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.579313 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.579333 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.579346 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.682210 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.682269 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.682284 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.682304 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.682316 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.785066 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.785124 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.785135 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.785154 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.785166 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.887625 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.887693 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.887711 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.887738 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.887752 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.970185 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.970457 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:33 crc kubenswrapper[5107]: E0126 00:10:33.970668 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs podName:93b5402e-3f3e-4e3b-8cf4-f919871d0c86 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:37.970632672 +0000 UTC m=+82.888227018 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs") pod "network-metrics-daemon-bdn4m" (UID: "93b5402e-3f3e-4e3b-8cf4-f919871d0c86") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.990415 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.990497 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.990522 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.990552 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:33 crc kubenswrapper[5107]: I0126 00:10:33.990576 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:33Z","lastTransitionTime":"2026-01-26T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.093319 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.093406 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.093428 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.093464 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.093507 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.112814 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.112877 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:34 crc kubenswrapper[5107]: E0126 00:10:34.112995 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:34 crc kubenswrapper[5107]: E0126 00:10:34.113154 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.172273 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:34 crc kubenswrapper[5107]: E0126 00:10:34.172408 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:38.172384828 +0000 UTC m=+83.089979174 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.195955 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.196022 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.196036 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.196058 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.196073 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.298830 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.298943 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.298960 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.298981 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.298994 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.401778 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.401838 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.401852 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.401872 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.401902 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.504281 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.504366 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.504391 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.504422 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.504443 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.607784 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.607864 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.607926 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.607955 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.607992 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.710764 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.710833 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.710847 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.710874 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.710908 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.813477 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.813563 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.813581 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.813610 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.813630 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.916980 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.917065 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.917082 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.917110 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:34 crc kubenswrapper[5107]: I0126 00:10:34.917128 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:34Z","lastTransitionTime":"2026-01-26T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.019483 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.019529 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.019538 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.019561 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.019571 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:35Z","lastTransitionTime":"2026-01-26T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.112691 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:35 crc kubenswrapper[5107]: E0126 00:10:35.112867 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.112964 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:35 crc kubenswrapper[5107]: E0126 00:10:35.113162 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.122616 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.122682 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.122695 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.122716 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.122732 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:35Z","lastTransitionTime":"2026-01-26T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.225578 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.225659 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.225673 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.225694 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.225709 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:35Z","lastTransitionTime":"2026-01-26T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.329089 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.329160 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.329179 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.329202 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.329217 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:35Z","lastTransitionTime":"2026-01-26T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.432537 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.432602 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.432624 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.432648 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.432661 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:35Z","lastTransitionTime":"2026-01-26T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.535190 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.535240 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.535252 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.535269 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.535282 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:35Z","lastTransitionTime":"2026-01-26T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.637732 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.637781 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.637790 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.637806 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.637818 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:35Z","lastTransitionTime":"2026-01-26T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.742854 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.742931 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.742941 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.742970 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.742982 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:35Z","lastTransitionTime":"2026-01-26T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.845553 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.845613 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.845626 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.845647 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.845658 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:35Z","lastTransitionTime":"2026-01-26T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.948078 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.948149 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.948163 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.948184 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:35 crc kubenswrapper[5107]: I0126 00:10:35.948196 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:35Z","lastTransitionTime":"2026-01-26T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.050577 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.050636 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.050647 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.050666 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.050678 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.113121 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.113175 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:36 crc kubenswrapper[5107]: E0126 00:10:36.113292 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:36 crc kubenswrapper[5107]: E0126 00:10:36.113545 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.134661 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.149449 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.153205 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.153278 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.153327 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.153369 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.153400 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.162038 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":
0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.185621 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/mani
fests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da2739
0e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.207042 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.223070 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.234333 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.294142 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.294199 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.294214 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.294236 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.294251 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.301323 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.314837 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.328642 5107 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.339897 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.350458 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.363256 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.375398 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.386029 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.396862 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.397307 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.397369 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.397380 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.397401 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.397413 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.408629 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"c
pu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.423745 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.436573 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.500070 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.500143 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.500159 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.500182 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.500197 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.602842 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.602991 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.603038 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.603057 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.603068 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.704820 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.704864 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.704874 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.704903 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.704915 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.807472 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.808430 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.808505 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.808585 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.808678 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.912186 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.912240 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.912250 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.912268 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:36 crc kubenswrapper[5107]: I0126 00:10:36.912280 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:36Z","lastTransitionTime":"2026-01-26T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.008192 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.008300 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.008398 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.008489 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:45.008469132 +0000 UTC m=+89.926063478 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.008606 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.008811 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:45.00876164 +0000 UTC m=+89.926356136 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.014785 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.014989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.015009 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.015035 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.015047 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.109911 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.109969 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.110062 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.110097 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.110109 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.110122 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 
00:10:37.110140 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.110154 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.110217 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:45.110197699 +0000 UTC m=+90.027792045 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.110238 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:45.1102319 +0000 UTC m=+90.027826246 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.112745 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.112836 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.112978 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:37 crc kubenswrapper[5107]: E0126 00:10:37.113153 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.117295 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.117333 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.117343 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.117360 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.117370 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.219943 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.220024 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.220035 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.220065 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.220081 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.322195 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.322250 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.322261 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.322276 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.322287 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.425426 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.425546 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.425560 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.425586 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.425599 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.528345 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.528418 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.528440 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.528466 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.528485 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.631940 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.632011 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.632024 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.632048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.632063 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.735027 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.735077 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.735090 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.735109 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.735124 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.838449 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.838552 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.838600 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.838630 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.838653 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.941469 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.941530 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.941542 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.941559 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:37 crc kubenswrapper[5107]: I0126 00:10:37.941571 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:37Z","lastTransitionTime":"2026-01-26T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.021920 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:38 crc kubenswrapper[5107]: E0126 00:10:38.022206 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:38 crc kubenswrapper[5107]: E0126 00:10:38.022357 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs podName:93b5402e-3f3e-4e3b-8cf4-f919871d0c86 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:46.022315356 +0000 UTC m=+90.939909752 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs") pod "network-metrics-daemon-bdn4m" (UID: "93b5402e-3f3e-4e3b-8cf4-f919871d0c86") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.045210 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.045279 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.045293 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.045316 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.045335 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.112348 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:38 crc kubenswrapper[5107]: E0126 00:10:38.112494 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.112651 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:38 crc kubenswrapper[5107]: E0126 00:10:38.112938 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.147212 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.147295 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.147306 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.147363 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.147378 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.224006 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:38 crc kubenswrapper[5107]: E0126 00:10:38.224282 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:10:46.224232127 +0000 UTC m=+91.141826513 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.251452 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.251527 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.251541 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.251563 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.251579 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.354290 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.354359 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.354375 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.354395 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.354408 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.457118 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.457187 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.457206 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.457226 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.457239 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.560160 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.560233 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.560251 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.560274 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.560288 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.663182 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.663246 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.663256 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.663279 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.663292 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.765621 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.765742 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.765764 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.765791 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.765810 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.868861 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.868983 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.869007 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.869035 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.869058 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.972079 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.972148 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.972160 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.972179 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:38 crc kubenswrapper[5107]: I0126 00:10:38.972233 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:38Z","lastTransitionTime":"2026-01-26T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.075024 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.075082 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.075096 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.075116 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.075129 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.102870 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.102967 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.102981 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.103052 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.103073 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.112609 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:39 crc kubenswrapper[5107]: E0126 00:10:39.112762 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.112812 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:39 crc kubenswrapper[5107]: E0126 00:10:39.112858 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:39 crc kubenswrapper[5107]: E0126 00:10:39.123839 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.128667 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.128704 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.128716 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.128734 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.128747 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: E0126 00:10:39.139417 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.143709 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.143769 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.143782 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.143803 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.143816 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: E0126 00:10:39.156029 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.160648 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.160793 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.160871 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.160977 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.161057 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: E0126 00:10:39.171771 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.176603 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.176705 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.176854 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.177038 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.177179 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: E0126 00:10:39.191290 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:39 crc kubenswrapper[5107]: E0126 00:10:39.191418 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.193819 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.193852 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.193862 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.193879 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.193904 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.298527 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.298595 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.298607 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.298628 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.298645 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.401969 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.402040 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.402057 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.402081 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.402099 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.505997 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.506056 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.506070 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.506093 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.506112 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.608755 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.608826 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.608843 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.608866 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.608903 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.712913 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.712984 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.713003 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.713029 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.713047 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.816536 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.816605 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.816619 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.816647 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.816662 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.919366 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.919417 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.919427 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.919444 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:39 crc kubenswrapper[5107]: I0126 00:10:39.919455 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:39Z","lastTransitionTime":"2026-01-26T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.022214 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.022275 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.022288 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.022314 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.022327 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.112208 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.112298 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:40 crc kubenswrapper[5107]: E0126 00:10:40.112415 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:40 crc kubenswrapper[5107]: E0126 00:10:40.112512 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.125635 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.126103 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.126258 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.126417 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.126537 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.230429 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.230500 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.230515 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.230542 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.230562 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.333675 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.333741 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.333754 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.333774 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.333790 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.436396 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.436457 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.436473 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.436504 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.436525 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.539591 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.539766 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.539789 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.539825 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.539856 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.642958 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.643052 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.643079 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.643299 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.643325 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.745918 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.745982 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.745992 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.746011 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.746023 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.849732 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.849797 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.849809 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.849828 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.849841 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.953302 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.953413 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.953428 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.953453 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:40 crc kubenswrapper[5107]: I0126 00:10:40.953466 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:40Z","lastTransitionTime":"2026-01-26T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.056447 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.056564 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.056584 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.056646 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.056668 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.112972 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.113001 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:41 crc kubenswrapper[5107]: E0126 00:10:41.113146 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:41 crc kubenswrapper[5107]: E0126 00:10:41.113311 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.159136 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.159197 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.159211 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.159228 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.159239 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.262042 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.262101 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.262116 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.262136 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.262148 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.365126 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.365186 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.365198 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.365218 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.365230 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.468135 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.468215 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.468230 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.468321 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.468336 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.571137 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.571242 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.571261 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.571282 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.571297 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.674184 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.674594 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.674687 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.674779 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.674866 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.777129 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.777216 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.777238 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.777276 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.777301 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.880059 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.880377 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.880446 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.880718 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.880812 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.982589 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.982650 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.982664 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.982684 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:41 crc kubenswrapper[5107]: I0126 00:10:41.982703 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:41Z","lastTransitionTime":"2026-01-26T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.087288 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.087365 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.087380 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.087408 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.087425 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.112142 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.112206 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:42 crc kubenswrapper[5107]: E0126 00:10:42.112352 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:42 crc kubenswrapper[5107]: E0126 00:10:42.112552 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.190717 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.190788 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.190801 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.190823 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.190838 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.293896 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.293950 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.293962 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.293979 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.293991 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.396505 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.397048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.397259 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.397467 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.397646 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.500731 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.501092 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.501175 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.501254 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.501340 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.604527 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.604588 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.604604 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.604625 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.604640 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.694441 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.707037 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.707081 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.707091 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.707104 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.707115 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.710855 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.725867 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.748033 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0
dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.761553 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.776504 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.788347 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.801296 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.809498 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.809578 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.809601 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.809628 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.809646 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.827520 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\
\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.848565 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e8
12e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.868140 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.885802 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.903530 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.912217 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.912414 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.912511 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.912634 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.912738 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.916103 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been 
read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.928778 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.944866 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.956227 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.972270 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5107]: I0126 00:10:42.985056 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.004077 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.016025 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.016474 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:43 crc kubenswrapper[5107]: 
I0126 00:10:43.016578 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.016681 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.016784 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:43Z","lastTransitionTime":"2026-01-26T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.112745 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:43 crc kubenswrapper[5107]: E0126 00:10:43.113221 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.113526 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:43 crc kubenswrapper[5107]: E0126 00:10:43.113987 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:43 crc kubenswrapper[5107]: E0126 00:10:43.115152 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:43 crc kubenswrapper[5107]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:43 crc kubenswrapper[5107]: set -uo pipefail Jan 26 00:10:43 crc kubenswrapper[5107]: Jan 26 00:10:43 crc kubenswrapper[5107]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 26 00:10:43 crc kubenswrapper[5107]: Jan 26 00:10:43 crc kubenswrapper[5107]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 26 00:10:43 crc kubenswrapper[5107]: HOSTS_FILE="/etc/hosts" Jan 26 00:10:43 crc kubenswrapper[5107]: TEMP_FILE="/tmp/hosts.tmp" Jan 26 00:10:43 crc kubenswrapper[5107]: Jan 26 00:10:43 crc kubenswrapper[5107]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 26 00:10:43 crc kubenswrapper[5107]: Jan 26 00:10:43 crc kubenswrapper[5107]: # Make a temporary file with the old hosts file's attributes. Jan 26 00:10:43 crc kubenswrapper[5107]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 26 00:10:43 crc kubenswrapper[5107]: echo "Failed to preserve hosts file. Exiting." 
Jan 26 00:10:43 crc kubenswrapper[5107]: exit 1 Jan 26 00:10:43 crc kubenswrapper[5107]: fi Jan 26 00:10:43 crc kubenswrapper[5107]: Jan 26 00:10:43 crc kubenswrapper[5107]: while true; do Jan 26 00:10:43 crc kubenswrapper[5107]: declare -A svc_ips Jan 26 00:10:43 crc kubenswrapper[5107]: for svc in "${services[@]}"; do Jan 26 00:10:43 crc kubenswrapper[5107]: # Fetch service IP from cluster dns if present. We make several tries Jan 26 00:10:43 crc kubenswrapper[5107]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 26 00:10:43 crc kubenswrapper[5107]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 26 00:10:43 crc kubenswrapper[5107]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 26 00:10:43 crc kubenswrapper[5107]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:43 crc kubenswrapper[5107]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:43 crc kubenswrapper[5107]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 26 00:10:43 crc kubenswrapper[5107]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 26 00:10:43 crc kubenswrapper[5107]: for i in ${!cmds[*]} Jan 26 00:10:43 crc kubenswrapper[5107]: do Jan 26 00:10:43 crc kubenswrapper[5107]: ips=($(eval "${cmds[i]}")) Jan 26 00:10:43 crc kubenswrapper[5107]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 26 00:10:43 crc kubenswrapper[5107]: svc_ips["${svc}"]="${ips[@]}" Jan 26 00:10:43 crc kubenswrapper[5107]: break Jan 26 00:10:43 crc kubenswrapper[5107]: fi Jan 26 00:10:43 crc kubenswrapper[5107]: done Jan 26 00:10:43 crc kubenswrapper[5107]: done Jan 26 00:10:43 crc kubenswrapper[5107]: Jan 26 00:10:43 crc kubenswrapper[5107]: # Update /etc/hosts only if we get valid service IPs Jan 26 00:10:43 crc kubenswrapper[5107]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 26 00:10:43 crc kubenswrapper[5107]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 26 00:10:43 crc kubenswrapper[5107]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 26 00:10:43 crc kubenswrapper[5107]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 26 00:10:43 crc kubenswrapper[5107]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 26 00:10:43 crc kubenswrapper[5107]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 26 00:10:43 crc kubenswrapper[5107]: sleep 60 & wait Jan 26 00:10:43 crc kubenswrapper[5107]: continue Jan 26 00:10:43 crc kubenswrapper[5107]: fi Jan 26 00:10:43 crc kubenswrapper[5107]: Jan 26 00:10:43 crc kubenswrapper[5107]: # Append resolver entries for services Jan 26 00:10:43 crc kubenswrapper[5107]: rc=0 Jan 26 00:10:43 crc kubenswrapper[5107]: for svc in "${!svc_ips[@]}"; do Jan 26 00:10:43 crc kubenswrapper[5107]: for ip in ${svc_ips[${svc}]}; do Jan 26 00:10:43 crc kubenswrapper[5107]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 26 00:10:43 crc kubenswrapper[5107]: done Jan 26 00:10:43 crc kubenswrapper[5107]: done Jan 26 00:10:43 crc kubenswrapper[5107]: if [[ $rc -ne 0 ]]; then Jan 26 00:10:43 crc kubenswrapper[5107]: sleep 60 & wait Jan 26 00:10:43 crc kubenswrapper[5107]: continue Jan 26 00:10:43 crc kubenswrapper[5107]: fi Jan 26 00:10:43 crc kubenswrapper[5107]: Jan 26 00:10:43 crc kubenswrapper[5107]: Jan 26 00:10:43 crc kubenswrapper[5107]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 26 00:10:43 crc kubenswrapper[5107]: # Replace /etc/hosts with our modified version if needed Jan 26 00:10:43 crc kubenswrapper[5107]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 26 00:10:43 crc kubenswrapper[5107]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 26 00:10:43 crc kubenswrapper[5107]: fi Jan 26 00:10:43 crc kubenswrapper[5107]: sleep 60 & wait Jan 26 00:10:43 crc kubenswrapper[5107]: unset svc_ips Jan 26 00:10:43 crc kubenswrapper[5107]: done Jan 26 00:10:43 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnj62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-wbn74_openshift-dns(65e0e338-0636-411c-ac3c-9972beecf25b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:43 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:43 crc kubenswrapper[5107]: E0126 00:10:43.115637 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mzkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-94c4c_openshift-machine-config-operator(7d907601-1852-43f9-8a70-ef4e71351e81): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:43 crc kubenswrapper[5107]: E0126 00:10:43.115759 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:43 crc kubenswrapper[5107]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:43 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:43 crc kubenswrapper[5107]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 26 00:10:43 crc kubenswrapper[5107]: source /etc/kubernetes/apiserver-url.env Jan 26 00:10:43 crc kubenswrapper[5107]: else Jan 26 00:10:43 crc kubenswrapper[5107]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 26 00:10:43 crc kubenswrapper[5107]: exit 1 Jan 26 00:10:43 crc kubenswrapper[5107]: fi Jan 26 00:10:43 crc kubenswrapper[5107]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 26 00:10:43 crc kubenswrapper[5107]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:43 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:43 crc kubenswrapper[5107]: E0126 00:10:43.116981 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-wbn74" podUID="65e0e338-0636-411c-ac3c-9972beecf25b" Jan 26 00:10:43 crc kubenswrapper[5107]: E0126 00:10:43.117039 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 26 00:10:43 crc kubenswrapper[5107]: E0126 00:10:43.118120 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mzkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-94c4c_openshift-machine-config-operator(7d907601-1852-43f9-8a70-ef4e71351e81): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:43 crc kubenswrapper[5107]: E0126 00:10:43.119334 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.124000 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.124068 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.124086 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.124108 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.124122 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:43Z","lastTransitionTime":"2026-01-26T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.227146 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.227202 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.227214 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.227233 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.227246 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:43Z","lastTransitionTime":"2026-01-26T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.330136 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.330196 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.330212 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.330231 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.330242 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:43Z","lastTransitionTime":"2026-01-26T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.433587 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.433648 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.433672 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.433721 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.433738 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:43Z","lastTransitionTime":"2026-01-26T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.537516 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.537670 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.537695 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.537725 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.537750 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:43Z","lastTransitionTime":"2026-01-26T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.597526 5107 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.640676 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.640744 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.640764 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.640802 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.640820 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:43Z","lastTransitionTime":"2026-01-26T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.743658 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.744101 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.744295 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.744432 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.744580 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:43Z","lastTransitionTime":"2026-01-26T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.847954 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.848005 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.848014 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.848031 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.848041 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:43Z","lastTransitionTime":"2026-01-26T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.950115 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.950505 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.950578 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.950654 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:43 crc kubenswrapper[5107]: I0126 00:10:43.950725 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:43Z","lastTransitionTime":"2026-01-26T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.053529 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.054442 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.054493 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.054530 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.054569 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:44Z","lastTransitionTime":"2026-01-26T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.112462 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.112923 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:44 crc kubenswrapper[5107]: E0126 00:10:44.113330 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:44 crc kubenswrapper[5107]: E0126 00:10:44.113576 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:44 crc kubenswrapper[5107]: E0126 00:10:44.115691 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:44 crc kubenswrapper[5107]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 26 00:10:44 crc kubenswrapper[5107]: set -euo pipefail Jan 26 00:10:44 crc kubenswrapper[5107]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 26 00:10:44 crc kubenswrapper[5107]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 26 00:10:44 crc kubenswrapper[5107]: # As the secret mount is optional we must wait for the files to be present. Jan 26 00:10:44 crc kubenswrapper[5107]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Jan 26 00:10:44 crc kubenswrapper[5107]: TS=$(date +%s) Jan 26 00:10:44 crc kubenswrapper[5107]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 26 00:10:44 crc kubenswrapper[5107]: HAS_LOGGED_INFO=0 Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: log_missing_certs(){ Jan 26 00:10:44 crc kubenswrapper[5107]: CUR_TS=$(date +%s) Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 26 00:10:44 crc kubenswrapper[5107]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 26 00:10:44 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 26 00:10:44 crc kubenswrapper[5107]: HAS_LOGGED_INFO=1 Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: } Jan 26 00:10:44 crc kubenswrapper[5107]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 26 00:10:44 crc kubenswrapper[5107]: log_missing_certs Jan 26 00:10:44 crc kubenswrapper[5107]: sleep 5 Jan 26 00:10:44 crc kubenswrapper[5107]: done Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 26 00:10:44 crc kubenswrapper[5107]: exec /usr/bin/kube-rbac-proxy \ Jan 26 00:10:44 crc kubenswrapper[5107]: --logtostderr \ Jan 26 00:10:44 crc kubenswrapper[5107]: --secure-listen-address=:9108 \ Jan 26 00:10:44 crc kubenswrapper[5107]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 26 00:10:44 crc kubenswrapper[5107]: --upstream=http://127.0.0.1:29108/ \ Jan 26 00:10:44 crc kubenswrapper[5107]: --tls-private-key-file=${TLS_PK} \ Jan 26 00:10:44 crc kubenswrapper[5107]: --tls-cert-file=${TLS_CERT} Jan 26 00:10:44 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm2qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-kcwjn_openshift-ovn-kubernetes(ec13f4fa-c252-4f6a-9a31-43f70366ae48): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:44 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:44 crc kubenswrapper[5107]: E0126 00:10:44.115980 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wb77l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-4vppd_openshift-multus(65e3191d-a6c4-4983-aa24-9f03af38c82b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:44 crc kubenswrapper[5107]: E0126 00:10:44.116431 5107 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 26 00:10:44 crc kubenswrapper[5107]: E0126 00:10:44.117258 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-4vppd" podUID="65e3191d-a6c4-4983-aa24-9f03af38c82b" Jan 26 00:10:44 crc kubenswrapper[5107]: E0126 00:10:44.117830 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 26 00:10:44 crc kubenswrapper[5107]: E0126 00:10:44.118760 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:44 crc kubenswrapper[5107]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:44 crc kubenswrapper[5107]: source "/env/_master" Jan 26 00:10:44 crc kubenswrapper[5107]: set +o allexport Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt= Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt= Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt= Jan 26 00:10:44 crc 
kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt= Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "" != "" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag= Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: # This is needed so that converting clusters from GA to TP Jan 26 00:10:44 crc kubenswrapper[5107]: # will rollout control plane pods as well Jan 26 00:10:44 crc kubenswrapper[5107]: network_segmentation_enabled_flag= Jan 26 00:10:44 crc kubenswrapper[5107]: multi_network_enabled_flag= Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "true" != "true" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: multi_network_enabled_flag="--enable-multi-network" Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: route_advertisements_enable_flag= Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag= Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: # Enable multi-network policy if configured (control-plane always full mode) Jan 26 00:10:44 crc kubenswrapper[5107]: multi_network_policy_enabled_flag= Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "false" == "true" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: # Enable admin network policy if configured (control-plane always full mode) Jan 26 00:10:44 crc kubenswrapper[5107]: admin_network_policy_enabled_flag= Jan 26 00:10:44 crc kubenswrapper[5107]: if [[ "true" == "true" ]]; then Jan 26 00:10:44 crc kubenswrapper[5107]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: if [ "shared" == "shared" ]; then Jan 26 00:10:44 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode shared" Jan 26 00:10:44 crc kubenswrapper[5107]: elif [ "shared" == "local" ]; then Jan 26 00:10:44 crc kubenswrapper[5107]: gateway_mode_flags="--gateway-mode local" Jan 26 00:10:44 crc kubenswrapper[5107]: else Jan 26 00:10:44 crc kubenswrapper[5107]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 26 00:10:44 crc kubenswrapper[5107]: exit 1 Jan 26 00:10:44 crc kubenswrapper[5107]: fi Jan 26 00:10:44 crc kubenswrapper[5107]: Jan 26 00:10:44 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 26 00:10:44 crc kubenswrapper[5107]: exec /usr/bin/ovnkube \ Jan 26 00:10:44 crc kubenswrapper[5107]: --enable-interconnect \ Jan 26 00:10:44 crc kubenswrapper[5107]: --init-cluster-manager "${K8S_NODE}" \ Jan 26 00:10:44 crc kubenswrapper[5107]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 26 00:10:44 crc kubenswrapper[5107]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 26 00:10:44 crc kubenswrapper[5107]: --metrics-bind-address "127.0.0.1:29108" \ Jan 26 00:10:44 crc kubenswrapper[5107]: --metrics-enable-pprof \ Jan 26 00:10:44 crc kubenswrapper[5107]: --metrics-enable-config-duration \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${ovn_v4_join_subnet_opt} \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${ovn_v6_join_subnet_opt} \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${dns_name_resolver_enabled_flag} \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${persistent_ips_enabled_flag} \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${multi_network_enabled_flag} \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${network_segmentation_enabled_flag} \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${gateway_mode_flags} \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${route_advertisements_enable_flag} \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${preconfigured_udn_addresses_enable_flag} \ Jan 26 00:10:44 crc kubenswrapper[5107]: --enable-egress-ip=true \ Jan 26 00:10:44 crc kubenswrapper[5107]: --enable-egress-firewall=true \ Jan 26 00:10:44 crc kubenswrapper[5107]: --enable-egress-qos=true \ Jan 26 00:10:44 crc kubenswrapper[5107]: --enable-egress-service=true \ Jan 26 00:10:44 crc kubenswrapper[5107]: --enable-multicast \ Jan 26 00:10:44 crc kubenswrapper[5107]: --enable-multi-external-gateway=true \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${multi_network_policy_enabled_flag} \ Jan 26 00:10:44 crc kubenswrapper[5107]: ${admin_network_policy_enabled_flag} Jan 26 00:10:44 crc kubenswrapper[5107]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm2qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-kcwjn_openshift-ovn-kubernetes(ec13f4fa-c252-4f6a-9a31-43f70366ae48): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:44 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:44 crc kubenswrapper[5107]: E0126 00:10:44.120362 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.157453 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.157532 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.157544 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.157563 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.157574 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:44Z","lastTransitionTime":"2026-01-26T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.260132 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.260214 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.260231 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.260260 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.260282 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:44Z","lastTransitionTime":"2026-01-26T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.364309 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.364391 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.364412 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.364445 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.364466 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:44Z","lastTransitionTime":"2026-01-26T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.467705 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.467793 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.467815 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.467834 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.467868 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:44Z","lastTransitionTime":"2026-01-26T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.570520 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.570592 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.570612 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.570638 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.570655 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:44Z","lastTransitionTime":"2026-01-26T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.672764 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.672820 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.672835 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.672851 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.672862 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:44Z","lastTransitionTime":"2026-01-26T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.775451 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.775517 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.775537 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.775562 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.775583 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:44Z","lastTransitionTime":"2026-01-26T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.878301 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.878353 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.878365 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.878382 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.878397 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:44Z","lastTransitionTime":"2026-01-26T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.981206 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.981270 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.981281 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.981301 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:44 crc kubenswrapper[5107]: I0126 00:10:44.981311 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:44Z","lastTransitionTime":"2026-01-26T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.012581 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.012672 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.012823 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.012848 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.013000 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:01.012956292 +0000 UTC m=+105.930550638 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.013028 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:01.013018884 +0000 UTC m=+105.930613230 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.084250 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.084306 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.084319 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.084340 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.084351 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:45Z","lastTransitionTime":"2026-01-26T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.114276 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.114346 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.114413 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.114490 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.114533 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.114553 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.114499 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.114627 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:01.114603337 +0000 UTC m=+106.032197723 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.114634 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.114654 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.114754 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:01.114735471 +0000 UTC m=+106.032329857 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.114872 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.115053 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.115867 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.116037 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:45 crc kubenswrapper[5107]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 26 00:10:45 crc kubenswrapper[5107]: apiVersion: v1 Jan 26 00:10:45 crc kubenswrapper[5107]: clusters: Jan 26 00:10:45 crc kubenswrapper[5107]: - cluster: Jan 26 00:10:45 crc kubenswrapper[5107]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 26 00:10:45 crc kubenswrapper[5107]: server: https://api-int.crc.testing:6443 Jan 26 00:10:45 crc kubenswrapper[5107]: name: default-cluster Jan 26 00:10:45 crc kubenswrapper[5107]: contexts: Jan 26 00:10:45 crc kubenswrapper[5107]: - context: Jan 26 00:10:45 crc kubenswrapper[5107]: cluster: default-cluster Jan 26 00:10:45 crc kubenswrapper[5107]: namespace: default Jan 26 00:10:45 crc kubenswrapper[5107]: user: default-auth Jan 26 00:10:45 crc kubenswrapper[5107]: name: default-context Jan 26 00:10:45 crc kubenswrapper[5107]: current-context: default-context Jan 26 00:10:45 crc kubenswrapper[5107]: kind: Config Jan 26 00:10:45 crc kubenswrapper[5107]: preferences: {} Jan 26 00:10:45 crc kubenswrapper[5107]: users: Jan 26 00:10:45 crc kubenswrapper[5107]: - name: default-auth Jan 26 00:10:45 crc kubenswrapper[5107]: user: Jan 26 00:10:45 crc kubenswrapper[5107]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:45 crc kubenswrapper[5107]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 26 00:10:45 crc kubenswrapper[5107]: EOF Jan 26 00:10:45 crc kubenswrapper[5107]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bm9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-nvznv_openshift-ovn-kubernetes(d12cfb26-8718-4def-8f36-c7eaa12bc463): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:45 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.116768 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:45 crc kubenswrapper[5107]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 26 00:10:45 crc kubenswrapper[5107]: while [ true ]; Jan 26 00:10:45 crc kubenswrapper[5107]: do Jan 26 00:10:45 crc kubenswrapper[5107]: for f in $(ls /tmp/serviceca); do Jan 26 00:10:45 crc kubenswrapper[5107]: echo $f Jan 26 00:10:45 crc kubenswrapper[5107]: ca_file_path="/tmp/serviceca/${f}" Jan 26 00:10:45 crc kubenswrapper[5107]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 26 00:10:45 crc kubenswrapper[5107]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 26 00:10:45 crc kubenswrapper[5107]: if [ -e "${reg_dir_path}" ]; then Jan 26 00:10:45 crc kubenswrapper[5107]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:45 crc kubenswrapper[5107]: else Jan 26 00:10:45 crc kubenswrapper[5107]: mkdir $reg_dir_path Jan 26 00:10:45 crc kubenswrapper[5107]: cp $ca_file_path $reg_dir_path/ca.crt Jan 26 00:10:45 crc kubenswrapper[5107]: fi Jan 26 00:10:45 crc kubenswrapper[5107]: done Jan 26 00:10:45 crc kubenswrapper[5107]: for d in $(ls /etc/docker/certs.d); do Jan 26 00:10:45 crc kubenswrapper[5107]: echo $d Jan 26 00:10:45 crc kubenswrapper[5107]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 26 00:10:45 crc kubenswrapper[5107]: reg_conf_path="/tmp/serviceca/${dp}" Jan 26 00:10:45 crc kubenswrapper[5107]: if [ ! 
-e "${reg_conf_path}" ]; then Jan 26 00:10:45 crc kubenswrapper[5107]: rm -rf /etc/docker/certs.d/$d Jan 26 00:10:45 crc kubenswrapper[5107]: fi Jan 26 00:10:45 crc kubenswrapper[5107]: done Jan 26 00:10:45 crc kubenswrapper[5107]: sleep 60 & wait ${!} Jan 26 00:10:45 crc kubenswrapper[5107]: done Jan 26 00:10:45 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ptwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-p96sx_openshift-image-registry(4f6f097f-b642-4bc7-ae13-b78dad78b73e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:45 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.117611 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" Jan 26 00:10:45 crc kubenswrapper[5107]: E0126 00:10:45.118750 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-p96sx" podUID="4f6f097f-b642-4bc7-ae13-b78dad78b73e" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.186954 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.187037 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.187056 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.187079 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.187096 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:45Z","lastTransitionTime":"2026-01-26T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.289663 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.289719 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.289732 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.289749 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.289762 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:45Z","lastTransitionTime":"2026-01-26T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.392830 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.392985 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.393011 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.393029 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.393041 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:45Z","lastTransitionTime":"2026-01-26T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.495560 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.495645 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.495666 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.495693 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.495711 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:45Z","lastTransitionTime":"2026-01-26T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.598178 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.598232 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.598244 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.598267 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.598280 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:45Z","lastTransitionTime":"2026-01-26T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.700804 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.700857 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.700871 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.700905 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.700919 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:45Z","lastTransitionTime":"2026-01-26T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.803583 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.803645 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.803664 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.803687 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.803706 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:45Z","lastTransitionTime":"2026-01-26T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.907292 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.907360 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.907374 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.907397 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:45 crc kubenswrapper[5107]: I0126 00:10:45.907416 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:45Z","lastTransitionTime":"2026-01-26T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.010455 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.010538 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.010557 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.010586 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.010607 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:46Z","lastTransitionTime":"2026-01-26T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.025480 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:46 crc kubenswrapper[5107]: E0126 00:10:46.025726 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:46 crc kubenswrapper[5107]: E0126 00:10:46.025868 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs podName:93b5402e-3f3e-4e3b-8cf4-f919871d0c86 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:02.02583426 +0000 UTC m=+106.943428636 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs") pod "network-metrics-daemon-bdn4m" (UID: "93b5402e-3f3e-4e3b-8cf4-f919871d0c86") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.112826 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:46 crc kubenswrapper[5107]: E0126 00:10:46.113066 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.113102 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:46 crc kubenswrapper[5107]: E0126 00:10:46.113579 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.114882 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.114958 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.114977 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.115002 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.115021 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:46Z","lastTransitionTime":"2026-01-26T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:46 crc kubenswrapper[5107]: E0126 00:10:46.115732 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:46 crc kubenswrapper[5107]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:46 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:46 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:46 crc kubenswrapper[5107]: source "/env/_master" Jan 26 00:10:46 crc kubenswrapper[5107]: set +o allexport Jan 26 00:10:46 crc kubenswrapper[5107]: fi Jan 26 00:10:46 crc kubenswrapper[5107]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 26 00:10:46 crc kubenswrapper[5107]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 26 00:10:46 crc kubenswrapper[5107]: ho_enable="--enable-hybrid-overlay" Jan 26 00:10:46 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 26 00:10:46 crc kubenswrapper[5107]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 26 00:10:46 crc kubenswrapper[5107]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 26 00:10:46 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:46 crc kubenswrapper[5107]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 26 00:10:46 crc kubenswrapper[5107]: --webhook-host=127.0.0.1 \ Jan 26 00:10:46 crc kubenswrapper[5107]: --webhook-port=9743 \ Jan 26 00:10:46 crc kubenswrapper[5107]: ${ho_enable} \ Jan 26 00:10:46 crc kubenswrapper[5107]: --enable-interconnect \ Jan 26 00:10:46 crc kubenswrapper[5107]: --disable-approver \ Jan 26 00:10:46 crc kubenswrapper[5107]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 26 00:10:46 crc kubenswrapper[5107]: --wait-for-kubernetes-api=200s \ Jan 26 00:10:46 crc kubenswrapper[5107]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 26 00:10:46 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Jan 26 00:10:46 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 26 00:10:46 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:46 crc kubenswrapper[5107]: E0126 00:10:46.116792 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:46 crc kubenswrapper[5107]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 26 00:10:46 crc kubenswrapper[5107]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 26 00:10:46 crc kubenswrapper[5107]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75l2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-f2mpq_openshift-multus(2e5342d5-2d0c-458d-94b7-25c802ce298a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:46 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:46 crc kubenswrapper[5107]: E0126 00:10:46.117802 5107 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 26 00:10:46 crc kubenswrapper[5107]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 26 00:10:46 crc kubenswrapper[5107]: if [[ -f "/env/_master" ]]; then Jan 26 00:10:46 crc kubenswrapper[5107]: set -o allexport Jan 26 00:10:46 crc kubenswrapper[5107]: source "/env/_master" Jan 26 00:10:46 crc kubenswrapper[5107]: set +o allexport Jan 26 00:10:46 crc kubenswrapper[5107]: fi Jan 26 00:10:46 crc kubenswrapper[5107]: Jan 26 00:10:46 crc kubenswrapper[5107]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 26 00:10:46 crc kubenswrapper[5107]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 26 00:10:46 crc kubenswrapper[5107]: --disable-webhook \ Jan 26 00:10:46 crc kubenswrapper[5107]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 26 00:10:46 crc kubenswrapper[5107]: --loglevel="${LOGLEVEL}" Jan 26 00:10:46 crc kubenswrapper[5107]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 26 00:10:46 crc kubenswrapper[5107]: > logger="UnhandledError" Jan 26 00:10:46 crc kubenswrapper[5107]: E0126 00:10:46.117929 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-f2mpq" podUID="2e5342d5-2d0c-458d-94b7-25c802ce298a" Jan 26 00:10:46 crc kubenswrapper[5107]: E0126 00:10:46.119112 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.128417 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.144188 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.157940 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.178604 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.193313 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.209951 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.219377 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.219448 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.219476 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.219517 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:46 crc 
kubenswrapper[5107]: I0126 00:10:46.219548 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:46Z","lastTransitionTime":"2026-01-26T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.228071 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:10:46 crc kubenswrapper[5107]: E0126 00:10:46.228639 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:02.228591235 +0000 UTC m=+107.146185601 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.245402 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0
dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.265132 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.283413 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.297380 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.308774 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.322274 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.322318 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.322334 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.322353 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.322366 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:46Z","lastTransitionTime":"2026-01-26T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.331492 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\
\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.352865 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e8
12e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.368512 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.383012 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.404037 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.418976 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.425527 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:46 
crc kubenswrapper[5107]: I0126 00:10:46.425647 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.425682 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.425719 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.425744 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:46Z","lastTransitionTime":"2026-01-26T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.436138 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.454782 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.529492 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.529915 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.530104 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.530355 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.530556 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:46Z","lastTransitionTime":"2026-01-26T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.633048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.633483 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.633799 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.635328 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.635377 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:46Z","lastTransitionTime":"2026-01-26T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.737834 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.737988 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.738018 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.738062 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.738098 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:46Z","lastTransitionTime":"2026-01-26T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.840970 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.841029 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.841043 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.841060 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.841075 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:46Z","lastTransitionTime":"2026-01-26T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.943931 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.943968 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.943978 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.943994 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:46 crc kubenswrapper[5107]: I0126 00:10:46.944005 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:46Z","lastTransitionTime":"2026-01-26T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.046584 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.046651 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.046664 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.046685 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.046698 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:47Z","lastTransitionTime":"2026-01-26T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.113019 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:47 crc kubenswrapper[5107]: E0126 00:10:47.113307 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.113373 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:47 crc kubenswrapper[5107]: E0126 00:10:47.113581 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.150064 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.150152 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.150171 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.150198 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.150221 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:47Z","lastTransitionTime":"2026-01-26T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.254742 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.254832 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.254852 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.254920 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.254952 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:47Z","lastTransitionTime":"2026-01-26T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.358984 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.359048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.359066 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.359091 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.359109 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:47Z","lastTransitionTime":"2026-01-26T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.462872 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.462958 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.462973 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.462995 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.463009 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:47Z","lastTransitionTime":"2026-01-26T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.566729 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.566827 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.566849 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.566873 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.566929 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:47Z","lastTransitionTime":"2026-01-26T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.669331 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.669411 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.669436 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.669469 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.669493 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:47Z","lastTransitionTime":"2026-01-26T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.772929 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.772999 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.773023 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.773047 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.773063 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:47Z","lastTransitionTime":"2026-01-26T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.875596 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.875684 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.875703 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.875723 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.875738 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:47Z","lastTransitionTime":"2026-01-26T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.979276 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.979345 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.979374 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.979398 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:47 crc kubenswrapper[5107]: I0126 00:10:47.979413 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:47Z","lastTransitionTime":"2026-01-26T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.081964 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.082003 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.082013 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.082029 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.082039 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:48Z","lastTransitionTime":"2026-01-26T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.116776 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:48 crc kubenswrapper[5107]: E0126 00:10:48.116949 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.117092 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:48 crc kubenswrapper[5107]: E0126 00:10:48.117298 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.184583 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.184655 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.184677 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.184706 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.184729 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:48Z","lastTransitionTime":"2026-01-26T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.287638 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.287683 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.287696 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.287713 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.287726 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:48Z","lastTransitionTime":"2026-01-26T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.390474 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.390545 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.390571 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.390599 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.390622 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:48Z","lastTransitionTime":"2026-01-26T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.493973 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.494085 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.494107 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.494134 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.494158 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:48Z","lastTransitionTime":"2026-01-26T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.596804 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.596855 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.596867 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.596904 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.596916 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:48Z","lastTransitionTime":"2026-01-26T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.699811 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.699968 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.699991 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.700030 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.700050 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:48Z","lastTransitionTime":"2026-01-26T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.803094 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.803193 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.803219 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.803250 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.803275 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:48Z","lastTransitionTime":"2026-01-26T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.905993 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.906068 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.906080 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.906104 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:48 crc kubenswrapper[5107]: I0126 00:10:48.906117 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:48Z","lastTransitionTime":"2026-01-26T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.009131 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.009267 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.009290 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.009357 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.009377 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.112023 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.112037 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.112619 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.112676 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: E0126 00:10:49.112678 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.112702 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.112738 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.112763 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: E0126 00:10:49.112963 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.215405 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.215476 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.215494 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.215517 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.215604 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.318418 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.318488 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.318514 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.318544 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.318570 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.421370 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.421429 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.421447 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.421474 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.421497 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.425488 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.425529 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.425547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.425568 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.425582 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: E0126 00:10:49.436869 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.440899 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.440932 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.440941 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.440955 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.440968 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: E0126 00:10:49.456412 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.461483 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.461541 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.461554 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.461570 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.461581 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: E0126 00:10:49.485444 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.490011 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.490058 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.490067 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.490084 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.490134 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: E0126 00:10:49.511231 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.515808 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.515930 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.515958 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.515989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.516013 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: E0126 00:10:49.527729 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:49 crc kubenswrapper[5107]: E0126 00:10:49.527989 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.530131 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.530173 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.530187 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.530206 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.530222 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.633690 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.633795 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.633812 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.633843 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.633861 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.737290 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.737567 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.737578 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.737593 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.737604 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.840489 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.840552 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.840566 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.840583 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.840955 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.943653 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.943707 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.943718 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.943737 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:49 crc kubenswrapper[5107]: I0126 00:10:49.943750 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:49Z","lastTransitionTime":"2026-01-26T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.046519 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.046560 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.046569 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.046582 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.046592 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:50Z","lastTransitionTime":"2026-01-26T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.113178 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:50 crc kubenswrapper[5107]: E0126 00:10:50.113353 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.113415 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:50 crc kubenswrapper[5107]: E0126 00:10:50.113595 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.149225 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.149262 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.149273 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.149288 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.149299 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:50Z","lastTransitionTime":"2026-01-26T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.252263 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.253080 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.253345 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.253547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.253750 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:50Z","lastTransitionTime":"2026-01-26T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.356066 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.356399 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.356539 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.356673 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.356795 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:50Z","lastTransitionTime":"2026-01-26T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.459481 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.459533 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.459549 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.459570 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.459583 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:50Z","lastTransitionTime":"2026-01-26T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.562141 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.562564 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.562721 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.562851 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.563098 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:50Z","lastTransitionTime":"2026-01-26T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.666194 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.666266 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.666289 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.666312 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.666330 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:50Z","lastTransitionTime":"2026-01-26T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.769182 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.769230 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.769243 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.769259 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.769272 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:50Z","lastTransitionTime":"2026-01-26T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.871964 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.872014 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.872024 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.872039 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.872048 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:50Z","lastTransitionTime":"2026-01-26T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.973792 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.974084 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.974180 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.974263 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:50 crc kubenswrapper[5107]: I0126 00:10:50.974356 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:50Z","lastTransitionTime":"2026-01-26T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.076829 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.076869 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.076880 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.076911 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.076922 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:51Z","lastTransitionTime":"2026-01-26T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.112962 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:51 crc kubenswrapper[5107]: E0126 00:10:51.113341 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.112962 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:51 crc kubenswrapper[5107]: E0126 00:10:51.113568 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.179674 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.179710 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.179720 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.179734 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.179745 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:51Z","lastTransitionTime":"2026-01-26T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.282212 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.282278 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.282298 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.282326 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.282347 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:51Z","lastTransitionTime":"2026-01-26T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.305409 5107 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.384751 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.384822 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.384835 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.384855 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.384869 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:51Z","lastTransitionTime":"2026-01-26T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.488051 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.488100 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.488114 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.488130 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.488144 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:51Z","lastTransitionTime":"2026-01-26T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.590233 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.590277 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.590290 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.590306 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.590319 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:51Z","lastTransitionTime":"2026-01-26T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.692800 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.692877 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.692949 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.692980 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.693001 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:51Z","lastTransitionTime":"2026-01-26T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.795329 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.795409 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.795438 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.795472 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.795494 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:51Z","lastTransitionTime":"2026-01-26T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.897785 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.897867 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.897926 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.897958 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:51 crc kubenswrapper[5107]: I0126 00:10:51.897983 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:51Z","lastTransitionTime":"2026-01-26T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.001612 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.001669 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.001686 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.001710 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.001729 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.103986 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.104055 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.104073 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.104097 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.104116 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.113162 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:52 crc kubenswrapper[5107]: E0126 00:10:52.113323 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.113410 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:52 crc kubenswrapper[5107]: E0126 00:10:52.113551 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.206466 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.206556 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.206574 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.206602 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.206621 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.308800 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.308915 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.308945 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.308977 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.309000 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.411134 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.411183 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.411196 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.411211 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.411223 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.514144 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.514206 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.514224 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.514247 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.514259 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.616415 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.616468 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.616483 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.616503 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.616516 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.718791 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.718851 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.718870 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.718921 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.718941 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.821433 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.821515 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.821533 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.821559 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.821576 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.924257 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.924316 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.924331 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.924348 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:52 crc kubenswrapper[5107]: I0126 00:10:52.924362 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.029703 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.029936 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.029996 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.030016 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.030032 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:53Z","lastTransitionTime":"2026-01-26T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.112796 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:53 crc kubenswrapper[5107]: E0126 00:10:53.112955 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.112980 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:53 crc kubenswrapper[5107]: E0126 00:10:53.113046 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.131474 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.131522 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.131533 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.131547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.131557 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:53Z","lastTransitionTime":"2026-01-26T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.234156 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.234223 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.234243 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.234268 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.234285 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:53Z","lastTransitionTime":"2026-01-26T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.264213 5107 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.336166 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.336238 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.336255 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.336284 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.336303 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:53Z","lastTransitionTime":"2026-01-26T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.439227 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.439360 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.439387 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.439422 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.439447 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:53Z","lastTransitionTime":"2026-01-26T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.541227 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.541272 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.541286 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.541303 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.541316 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:53Z","lastTransitionTime":"2026-01-26T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.643223 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.643303 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.643328 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.643361 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.643386 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:53Z","lastTransitionTime":"2026-01-26T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.746243 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.746303 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.746315 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.746335 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.746348 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:53Z","lastTransitionTime":"2026-01-26T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.848589 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.849056 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.849222 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.849453 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.849609 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:53Z","lastTransitionTime":"2026-01-26T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.952683 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.952748 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.952763 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.952783 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:53 crc kubenswrapper[5107]: I0126 00:10:53.952797 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:53Z","lastTransitionTime":"2026-01-26T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.055599 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.055672 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.055683 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.055708 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.055721 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:54Z","lastTransitionTime":"2026-01-26T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.113024 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.113057 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:10:54 crc kubenswrapper[5107]: E0126 00:10:54.113204 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86"
Jan 26 00:10:54 crc kubenswrapper[5107]: E0126 00:10:54.113356 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.158841 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.158935 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.158957 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.158980 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.158996 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:54Z","lastTransitionTime":"2026-01-26T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.260853 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.260916 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.260926 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.260942 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.260953 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:54Z","lastTransitionTime":"2026-01-26T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.363269 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.363333 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.363348 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.363366 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.363378 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:54Z","lastTransitionTime":"2026-01-26T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.464936 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.464999 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.465017 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.465189 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.465210 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:54Z","lastTransitionTime":"2026-01-26T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.567646 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.567687 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.567695 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.567707 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.567716 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:54Z","lastTransitionTime":"2026-01-26T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.670115 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.670150 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.670159 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.670171 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.670180 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:54Z","lastTransitionTime":"2026-01-26T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.760535 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerStarted","Data":"ca4f19a93d78f95010c08dab3e320a9e85ceb7b2686445056274ad10b22600d4"} Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.760585 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerStarted","Data":"c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599"} Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.771400 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.771765 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.771804 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.771815 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.771831 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.771842 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:54Z","lastTransitionTime":"2026-01-26T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.781296 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"
name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.791686 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.803249 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.823099 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.835877 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.848024 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.869649 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0
dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.880940 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.880994 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.881007 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.881025 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.881038 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:54Z","lastTransitionTime":"2026-01-26T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.883114 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.895708 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.904498 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.911935 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.923811 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2f
b6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.938353 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.950015 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.961009 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.972866 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.980988 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ca4f19a93d78f95010c08dab3e320a9e85ceb7b2686445056274ad10b22600d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.983331 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.983366 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.983376 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.983391 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.983402 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:54Z","lastTransitionTime":"2026-01-26T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:54 crc kubenswrapper[5107]: I0126 00:10:54.988155 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.086615 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.087108 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.087133 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.087201 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.087231 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:55Z","lastTransitionTime":"2026-01-26T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.112412 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:55 crc kubenswrapper[5107]: E0126 00:10:55.112641 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.112713 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:55 crc kubenswrapper[5107]: E0126 00:10:55.112835 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.189959 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.190013 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.190028 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.190047 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.190059 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:55Z","lastTransitionTime":"2026-01-26T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.343101 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.343146 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.343158 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.343176 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.343187 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:55Z","lastTransitionTime":"2026-01-26T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.446618 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.446681 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.446692 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.446708 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.446718 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:55Z","lastTransitionTime":"2026-01-26T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.549060 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.549100 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.549112 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.549130 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.549143 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:55Z","lastTransitionTime":"2026-01-26T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.651965 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.652030 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.652042 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.652065 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.652077 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:55Z","lastTransitionTime":"2026-01-26T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.755597 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.755666 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.755681 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.755705 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.755721 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:55Z","lastTransitionTime":"2026-01-26T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.857525 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.857573 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.857584 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.857600 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.857612 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:55Z","lastTransitionTime":"2026-01-26T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.959674 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.959721 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.959731 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.959755 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:55 crc kubenswrapper[5107]: I0126 00:10:55.959778 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:55Z","lastTransitionTime":"2026-01-26T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.062130 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.062257 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.062274 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.062302 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.062315 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:56Z","lastTransitionTime":"2026-01-26T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.127300 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.127478 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:56 crc kubenswrapper[5107]: E0126 00:10:56.127666 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:56 crc kubenswrapper[5107]: E0126 00:10:56.127945 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.142528 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",
\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.156013 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea
5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}
]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.166445 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.166520 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.166540 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.166568 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.166587 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:56Z","lastTransitionTime":"2026-01-26T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.177823 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.322597 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.417237 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.417428 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.417456 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.417486 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.417499 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:56Z","lastTransitionTime":"2026-01-26T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.421708 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.460841 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ca4f19a93d78f95010c08dab3e320a9e85ceb7b2686445056274ad10b22600d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.469752 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 
00:10:56.477634 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.520032 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.520062 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.520072 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.520121 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.520132 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:56Z","lastTransitionTime":"2026-01-26T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.618789 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.622509 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.622595 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.622611 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.622631 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.622644 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:56Z","lastTransitionTime":"2026-01-26T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.632292 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.685368 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.706446 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.725562 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.725712 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.725758 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.725771 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.725789 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.725801 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:56Z","lastTransitionTime":"2026-01-26T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.736656 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":
0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.756993 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/mani
fests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da2739
0e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.768660 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wbn74" event={"ID":"65e0e338-0636-411c-ac3c-9972beecf25b","Type":"ContainerStarted","Data":"3e773cc031c5cfc52c9d7562ce61b7900513a4c4d68f5d2bdc0af7b6bb951ea3"} Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.770846 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.786554 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.796547 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.804766 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.814215 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.828412 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3e773cc031c5cfc52c9d7562ce61b7900513a4c4d68f5d2bdc0af7b6bb951ea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.830977 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.831081 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.831100 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.831123 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.831139 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:56Z","lastTransitionTime":"2026-01-26T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.843619 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\
\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.887465 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e8
12e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.897581 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.906310 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.919116 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.929661 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ca4f19a93d78f95010c08dab3e320a9e85ceb7b2686445056274ad10b22600d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.934118 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.934157 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.934165 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.934179 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.934191 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:56Z","lastTransitionTime":"2026-01-26T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.940043 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.949617 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.959704 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.975693 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:56 crc kubenswrapper[5107]: I0126 00:10:56.985961 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.000810 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.012248 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.025632 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.043507 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.043573 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.043591 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.043617 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:57 crc 
kubenswrapper[5107]: I0126 00:10:57.043631 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:57Z","lastTransitionTime":"2026-01-26T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.045254 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cr
i-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.057764 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.068608 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.112900 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.112962 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:57 crc kubenswrapper[5107]: E0126 00:10:57.113064 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:57 crc kubenswrapper[5107]: E0126 00:10:57.113126 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.145264 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.145320 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.145331 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.145348 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.145359 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:57Z","lastTransitionTime":"2026-01-26T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.247862 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.248356 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.248373 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.248396 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.248413 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:57Z","lastTransitionTime":"2026-01-26T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.350434 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.350513 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.350533 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.350550 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.350564 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:57Z","lastTransitionTime":"2026-01-26T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.453241 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.453294 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.453306 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.453324 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.453336 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:57Z","lastTransitionTime":"2026-01-26T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.556554 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.556625 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.556638 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.556661 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.556674 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:57Z","lastTransitionTime":"2026-01-26T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.661377 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.661445 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.661460 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.661480 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.661493 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:57Z","lastTransitionTime":"2026-01-26T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.763425 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.763481 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.763494 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.763546 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.763559 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:57Z","lastTransitionTime":"2026-01-26T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.775394 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"bc034bdb5dc6473a796abe6eb7c5385f50709b5b8f58f05225af3da831c2eda9"} Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.787226 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3e773cc031c5cfc52c9d7562ce61b7900513a4c4d68f5d2bdc0af7b6bb951ea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.802303 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2f
b6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.815139 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.824586 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.835773 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.853417 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.865003 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ca4f19a93d78f95010c08dab3e320a9e85ceb7b2686445056274ad10b22600d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.866208 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.866247 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.866258 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.866274 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.866285 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:57Z","lastTransitionTime":"2026-01-26T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.874148 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.883667 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.892006 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.904309 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc034bdb5dc6473a796abe6eb7c5385f50709b5b8f58f05225af3da831c2eda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.913063 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.951328 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.963112 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.969487 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.969535 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.969550 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.969567 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.969580 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:57Z","lastTransitionTime":"2026-01-26T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:57 crc kubenswrapper[5107]: I0126 00:10:57.973693 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":
0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.074162 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.074204 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.074213 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.074226 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.074236 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.112544 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:10:58 crc kubenswrapper[5107]: E0126 00:10:58.112713 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.113427 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:58 crc kubenswrapper[5107]: E0126 00:10:58.115299 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.154142 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\
"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"
name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.187827 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.187898 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.187912 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.187931 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.187941 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.199697 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.213835 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.227678 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.290416 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.290690 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.290961 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.290981 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.290994 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.393700 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.394088 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.394263 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.394456 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.394630 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.496910 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.496950 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.496960 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.496975 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.496985 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.616209 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.616279 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.616294 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.616311 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.616347 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.722701 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.722762 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.722776 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.722795 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.722808 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.810737 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p96sx" event={"ID":"4f6f097f-b642-4bc7-ae13-b78dad78b73e","Type":"ContainerStarted","Data":"3321b8024342fefd9badbb4efee28aa081c7385bc7955e2fdb2e3242b9fa1ce1"} Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.825796 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.825843 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.825854 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.825873 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.825902 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.931318 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.931351 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.931360 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.931374 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5107]: I0126 00:10:58.931385 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.034941 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.035068 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.035085 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.035109 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.035122 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.072898 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trus
t/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o:/
/d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.103512 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memo
ry\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.116471 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.117786 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:59 crc kubenswrapper[5107]: E0126 00:10:59.118023 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.118245 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:59 crc kubenswrapper[5107]: E0126 00:10:59.118418 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.138497 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.138552 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.138566 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.138596 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.138610 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.161400 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.173992 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.185749 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ca4f19a93d78f95010c08dab3e320a9e85ceb7b2686445056274ad10b22600d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.196869 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 
00:10:59.232405 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.241113 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.241185 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.241199 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.241220 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.241233 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.276964 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.291249 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc034bdb5dc6473a796abe6eb7c5385f50709b5b8f58f05225af3da831c2eda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.303162 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.478286 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.478334 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.478346 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.478362 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.478373 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.488386 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.515704 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.542878 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.556597 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.556636 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.556647 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.556664 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.556675 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: E0126 00:10:59.569647 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.573105 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.573134 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.573142 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.573155 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.573164 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.580933 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61
bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"mem
ory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83
612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: E0126 00:10:59.583802 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.587496 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.587560 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.587570 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.587586 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.587597 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.591919 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: E0126 00:10:59.597231 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.600216 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.600262 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.600276 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.600295 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.600307 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.600713 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: E0126 00:10:59.609640 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.609788 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://3321b8024342fefd9badbb4efee28aa081c7385bc7955e2fdb2e3242b9fa1ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.612285 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.612318 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.612330 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.612348 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.612361 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.618513 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3e773cc031c5cfc52c9d7562ce61b7900513a4c4d68f5d2bdc0af7b6bb951ea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: E0126 00:10:59.622325 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef157
6b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774
342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"066ffcb3-e507-457f-8c26-3fe6d538369f\\\",\\\"systemUUID\\\":\\\"d9c41fe3-854d-4f0f-b42d-bfcf817b111c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5107]: E0126 00:10:59.622504 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.624013 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.624054 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.624078 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.624100 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.624117 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.727607 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.727695 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.727761 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.727796 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.727811 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.830057 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.830291 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.830357 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.830424 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.830487 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.932522 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.932582 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.932594 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.932614 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5107]: I0126 00:10:59.932629 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.035012 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.035063 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.035079 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.035102 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.035120 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.112834 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.112912 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:00 crc kubenswrapper[5107]: E0126 00:11:00.113028 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:00 crc kubenswrapper[5107]: E0126 00:11:00.113142 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.137081 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.137131 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.137143 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.137158 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.137169 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.239963 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.240012 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.240026 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.240045 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.240054 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.342926 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.343001 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.343026 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.343058 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.343082 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.445668 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.445750 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.445768 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.445795 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.445812 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.548691 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.549000 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.549116 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.549216 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.549297 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.652360 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.652421 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.652434 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.652454 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.652474 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.755194 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.755479 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.755549 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.755631 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.755706 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.819118 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" event={"ID":"ec13f4fa-c252-4f6a-9a31-43f70366ae48","Type":"ContainerStarted","Data":"278a16c98dd11167e9a1c7d0851eac90113bcf9aeda2aa7628d1d0ac6ad6ec60"} Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.858654 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.858719 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.858733 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.858754 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.858769 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.961759 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.962327 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.962389 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.962502 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5107]: I0126 00:11:00.962580 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.065988 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.066045 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.066059 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.066075 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.066085 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.095600 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.095708 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.095791 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.095862 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.095845796 +0000 UTC m=+138.013440142 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.095795 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.096075 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.096059692 +0000 UTC m=+138.013654048 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.112833 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.112851 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.113632 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.113908 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.168685 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.169174 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.169212 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.169243 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.169263 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.197141 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.197186 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.197748 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.197801 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.197826 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.198055 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.198020445 +0000 UTC m=+138.115614791 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.198705 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.198731 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.198753 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:01 crc kubenswrapper[5107]: E0126 00:11:01.198801 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:33.198790067 +0000 UTC m=+138.116384413 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.271508 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.271561 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.271572 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.271591 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.271603 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.394105 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.394168 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.394190 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.394215 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.394234 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.497088 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.497167 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.497186 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.497214 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.497234 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.599944 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.600678 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.600748 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.600819 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.600911 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.703458 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.703510 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.703519 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.703537 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.703550 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.804840 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.804900 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.804916 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.804938 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.804955 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.908813 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.908939 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.908977 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.909011 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.909036 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.917943 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerStarted","Data":"270a88d97ac6a22dedc13a3fd5fdb2e9c6e6cc365f4bc78d7052d2a3477d7d7b"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.923623 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"56f9bca25b34b5ec26600264bd39b9b94ada8f6c5ac70c8fa6f8d7e74817c5a7"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.932431 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerStarted","Data":"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748"} Jan 26 00:11:01 crc kubenswrapper[5107]: I0126 00:11:01.997716 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.011376 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.011672 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.011816 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.011937 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.012054 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.013410 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.030941 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.040312 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://3321b8024342fefd9badbb4efee28aa081c7385bc7955e2fdb2e3242b9fa1ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.049184 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3e773cc031c5cfc52c9d7562ce61b7900513a4c4d68f5d2bdc0af7b6bb951ea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.063501 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"
5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\
"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.083569 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.097256 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.108175 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:02 crc kubenswrapper[5107]: E0126 00:11:02.108299 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:02 crc kubenswrapper[5107]: E0126 00:11:02.108371 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs podName:93b5402e-3f3e-4e3b-8cf4-f919871d0c86 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.108352973 +0000 UTC m=+139.025947319 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs") pod "network-metrics-daemon-bdn4m" (UID: "93b5402e-3f3e-4e3b-8cf4-f919871d0c86") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.110807 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.111985 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:02 crc kubenswrapper[5107]: E0126 00:11:02.112116 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.111989 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:02 crc kubenswrapper[5107]: E0126 00:11:02.112217 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.114687 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.114715 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.114728 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.114745 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.114758 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.123296 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://270a88d97ac6a22dedc13a3fd5fdb2e9c6e6cc365f4bc78d7052d2a3477d7d7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:11:01Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.132762 5107 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ca4f19a93d78f95010c08dab3e320a9e85ceb7b2686445056274ad10b22600d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.142144 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.152668 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.165506 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T0
0:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.182799 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc034bdb5dc6473a796abe6eb7c5385f50709b5b8f58f05225af3da831c2eda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.198533 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.218005 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.218390 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.218498 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.218592 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.218735 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.222555 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.243127 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.257235 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.310600 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:02 crc kubenswrapper[5107]: E0126 00:11:02.310935 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:34.310908452 +0000 UTC m=+139.228502798 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.325395 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.325475 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.325492 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.325519 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.325538 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.428711 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.428841 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.428922 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.428981 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.429026 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.532114 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.532168 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.532191 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.532212 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.532223 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.660224 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.660282 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.660296 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.660321 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.660338 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.809020 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.809066 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.809079 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.809100 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.809113 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.912733 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.912775 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.912784 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.912797 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.912806 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.936438 5107 generic.go:358] "Generic (PLEG): container finished" podID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerID="48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748" exitCode=0 Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.936544 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerDied","Data":"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748"} Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.939828 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"6f62555706454023775de0dad54b584d94a61a139f635f0b046e261f890c2dda"} Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.947682 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmtjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bdn4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.957931 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec13f4fa-c252-4f6a-9a31-43f70366ae48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm2qk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-kcwjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.968058 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd7339a-991d-4b65-8e8c-d3b049e9fa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be5bcbd76c10288ba86ec209af691e631a5c24d4f596b8b2a22be27a2e5b6026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66261f161454fe77fe91d953cb28bc4a8ff0280d9efd05d4e70e51219879c1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.980214 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bc034bdb5dc6473a796abe6eb7c5385f50709b5b8f58f05225af3da831c2eda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[5107]: I0126 00:11:02.990084 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.005853 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d12cfb26-8718-4def-8f36-c7eaa12bc463\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:11:01Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bm9q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nvznv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.015300 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.015336 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.015345 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.015359 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.015369 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:03Z","lastTransitionTime":"2026-01-26T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.018589 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-f2mpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e5342d5-2d0c-458d-94b7-25c802ce298a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-75l2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f2mpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.029796 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"870e1d1f-d5de-4cb0-afd3-e32ee3e21ad9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1c2645c2d7f91e355504de88c19902bd7091a30b8fb1e6bffe3bd643d9ae87e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6cd1e8970c9bd97f174884ae8760b3f67982935515109cac7fc2423d03e2cdc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c4c9a487362af2080d699cfb3c25b37fee4ea7ee71fe4c120513c8a93e345bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fb217bcfe9aa467ac71b42c179decfdafc5c72d7f016d31dfa2887695175d71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.059051 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b926ca7e-55ee-4b84-a5c2-3eea448cf9c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://80635fc424a05f12a9bb60d0ceb42d4a25d7bbc065e69e32316354bfa3c1c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://2986b22cb8ac794f2297b2bb06e60e4f85638acb9c56a9ccf8a86e5d42ae8251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://976d360799dd0382ad10776370c3db39c364353d2a4c9ffdd339503160e251db\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1d272e8f9f86eb13c31a8613165562354adc102c6c7674464a48f4c72fc4a3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3d24145582663d59a87810736c9cba433c006ed3baf7391cf09c2341c5e6b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0
dfd25ee9a60e0168d7820991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef12b382c5307a78f01ef314d3e75d72d206f0dfd25ee9a60e0168d7820991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6251fd6a7dd5377dd9c109521cf900e31207540ed15ef5bf9592c4c345a40a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce3e8da27390e58bcf61b1f7676cc8cabdc3a54e0cd5d75796309f6044def15e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.071902 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.085549 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.095667 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p96sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6f097f-b642-4bc7-ae13-b78dad78b73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://3321b8024342fefd9badbb4efee28aa081c7385bc7955e2fdb2e3242b9fa1ce1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:57Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ptwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p96sx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.104475 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wbn74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e0e338-0636-411c-ac3c-9972beecf25b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://3e773cc031c5cfc52c9d7562ce61b7900513a4c4d68f5d2bdc0af7b6bb951ea3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnj62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wbn74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.112913 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.113193 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:03 crc kubenswrapper[5107]: E0126 00:11:03.113518 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:03 crc kubenswrapper[5107]: E0126 00:11:03.114069 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.119431 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"504c44df-fe93-44f1-bab1-0ea8b1eb3980\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o:
//6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:10:05Z\\\",\\\"message\\\":\\\"o:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0126 00:10:04.015346 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0126 00:10:04.015512 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0126 00:10:04.016590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-910484192/tls.crt::/tmp/serving-cert-910484192/tls.key\\\\\\\"\\\\nI0126 00:10:04.772936 1 requestheader_controller.go:255] Loaded a new request header values for 
RequestHeaderAuthRequestController\\\\nI0126 00:10:04.776564 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:10:04.777520 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:10:04.777595 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:10:04.777608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:10:04.782656 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 00:10:04.782703 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:10:04.782726 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782734 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:10:04.782738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:10:04.782742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:10:04.782745 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:10:04.782749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:10:04.785948 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:10:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:09:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.119909 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.119944 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.119955 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.119994 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.120011 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:03Z","lastTransitionTime":"2026-01-26T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.134441 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b63ba3-1e0f-4fc0-8c1f-0c667403148c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a32faca2b6b353b711ddefefc6c8849adfa0a7790893f7c1faa5a3f9d703fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab1303ede901dbcb4161028c6937d2b8c3d5c9bed4e1b0e53f56f5f2d84ac85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e262e6f9f8205c48c94e191de6b6732c6294e9f794db6f66c90b561ec016e455\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:09:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:09:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.147585 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.161544 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.175293 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65e3191d-a6c4-4983-aa24-9f03af38c82b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://270a88d97ac6a22dedc13a3fd5fdb2e9c6e6cc365f4bc78d7052d2a3477d7d7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:11:01Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb77l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4vppd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.187717 5107 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d907601-1852-43f9-8a70-ef4e71351e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ca4f19a93d78f95010c08dab3e320a9e85ceb7b2686445056274ad10b22600d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzkl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-94c4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.223067 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.223467 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.223614 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.223756 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.223875 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:03Z","lastTransitionTime":"2026-01-26T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.326071 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.326124 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.326135 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.326152 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.326163 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:03Z","lastTransitionTime":"2026-01-26T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.432766 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.432866 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.432906 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.432932 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.432948 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:03Z","lastTransitionTime":"2026-01-26T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.581979 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.582063 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.582078 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.582103 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.582122 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:03Z","lastTransitionTime":"2026-01-26T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.701863 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.701942 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.701962 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.701994 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.702007 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:03Z","lastTransitionTime":"2026-01-26T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.804215 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.804273 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.804285 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.804302 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.804313 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:03Z","lastTransitionTime":"2026-01-26T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.907103 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.907155 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.907164 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.907180 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:03 crc kubenswrapper[5107]: I0126 00:11:03.907191 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:03Z","lastTransitionTime":"2026-01-26T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.009798 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.009844 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.009857 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.009874 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.009903 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:04Z","lastTransitionTime":"2026-01-26T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.033226 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f2mpq" event={"ID":"2e5342d5-2d0c-458d-94b7-25c802ce298a","Type":"ContainerStarted","Data":"0af05be8661681d1cc4310b5d003875b708d55167f48758b301cbd8b2fa6aad8"} Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.112175 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:04 crc kubenswrapper[5107]: E0126 00:11:04.112384 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.112441 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:04 crc kubenswrapper[5107]: E0126 00:11:04.112597 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.113386 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.113457 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.113478 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.113507 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.113521 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:04Z","lastTransitionTime":"2026-01-26T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.216259 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.216319 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.216329 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.216353 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.216363 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:04Z","lastTransitionTime":"2026-01-26T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.321786 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.321862 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.321896 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.321923 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.321937 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:04Z","lastTransitionTime":"2026-01-26T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.350005 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=36.349947256 podStartE2EDuration="36.349947256s" podCreationTimestamp="2026-01-26 00:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:04.349279738 +0000 UTC m=+109.266874084" watchObservedRunningTime="2026-01-26 00:11:04.349947256 +0000 UTC m=+109.267541592" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.350363 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=36.350357198 podStartE2EDuration="36.350357198s" podCreationTimestamp="2026-01-26 00:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:04.258933168 +0000 UTC m=+109.176527514" watchObservedRunningTime="2026-01-26 00:11:04.350357198 +0000 UTC m=+109.267951544" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.426159 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.426582 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.426912 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.427156 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.427368 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:04Z","lastTransitionTime":"2026-01-26T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.447472 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-p96sx" podStartSLOduration=84.447445956 podStartE2EDuration="1m24.447445956s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:04.447031964 +0000 UTC m=+109.364626330" watchObservedRunningTime="2026-01-26 00:11:04.447445956 +0000 UTC m=+109.365040302" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.476108 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wbn74" podStartSLOduration=84.476079524 podStartE2EDuration="1m24.476079524s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:04.475413265 +0000 UTC m=+109.393007631" watchObservedRunningTime="2026-01-26 00:11:04.476079524 +0000 UTC m=+109.393673880" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.530413 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.531031 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.531143 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.531230 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.531320 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:04Z","lastTransitionTime":"2026-01-26T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.533988 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=36.533970959 podStartE2EDuration="36.533970959s" podCreationTimestamp="2026-01-26 00:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:04.50281701 +0000 UTC m=+109.420411366" watchObservedRunningTime="2026-01-26 00:11:04.533970959 +0000 UTC m=+109.451565305" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.559307 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=36.559275004 podStartE2EDuration="36.559275004s" podCreationTimestamp="2026-01-26 00:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:04.534873434 +0000 UTC m=+109.452467800" watchObservedRunningTime="2026-01-26 00:11:04.559275004 +0000 UTC m=+109.476869360" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.619406 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podStartSLOduration=83.619379 podStartE2EDuration="1m23.619379s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:04.608568049 +0000 UTC m=+109.526162395" watchObservedRunningTime="2026-01-26 00:11:04.619379 +0000 UTC m=+109.536973356" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.634333 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.634393 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.634404 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.634423 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.634436 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:04Z","lastTransitionTime":"2026-01-26T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.649213 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=36.649190432 podStartE2EDuration="36.649190432s" podCreationTimestamp="2026-01-26 00:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:04.64877024 +0000 UTC m=+109.566364586" watchObservedRunningTime="2026-01-26 00:11:04.649190432 +0000 UTC m=+109.566784778" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.736512 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.736565 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.736596 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.736623 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.736640 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:04Z","lastTransitionTime":"2026-01-26T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.839285 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.839715 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.839979 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.840281 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.840435 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:04Z","lastTransitionTime":"2026-01-26T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.942983 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.944351 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.944493 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.944617 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:04 crc kubenswrapper[5107]: I0126 00:11:04.944777 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:04Z","lastTransitionTime":"2026-01-26T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.038265 5107 generic.go:358] "Generic (PLEG): container finished" podID="65e3191d-a6c4-4983-aa24-9f03af38c82b" containerID="270a88d97ac6a22dedc13a3fd5fdb2e9c6e6cc365f4bc78d7052d2a3477d7d7b" exitCode=0 Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.038386 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerDied","Data":"270a88d97ac6a22dedc13a3fd5fdb2e9c6e6cc365f4bc78d7052d2a3477d7d7b"} Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.040492 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" event={"ID":"ec13f4fa-c252-4f6a-9a31-43f70366ae48","Type":"ContainerStarted","Data":"09cf3f70d300e3ac7e3df79f5dc1360a09542552aab2a9a0f740255d5e671e32"} Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.046273 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.046442 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.046595 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.046731 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.046819 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:05Z","lastTransitionTime":"2026-01-26T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.112528 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:05 crc kubenswrapper[5107]: E0126 00:11:05.113011 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.113263 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:05 crc kubenswrapper[5107]: E0126 00:11:05.113375 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.165273 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" podStartSLOduration=84.165249904 podStartE2EDuration="1m24.165249904s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:05.165058039 +0000 UTC m=+110.082652405" watchObservedRunningTime="2026-01-26 00:11:05.165249904 +0000 UTC m=+110.082844250" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.179497 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-f2mpq" podStartSLOduration=84.179473511 podStartE2EDuration="1m24.179473511s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:05.179419779 +0000 UTC m=+110.097014135" watchObservedRunningTime="2026-01-26 00:11:05.179473511 +0000 UTC m=+110.097067877" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.189914 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.189974 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.189985 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.190003 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.190016 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:05Z","lastTransitionTime":"2026-01-26T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.293437 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.293508 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.293523 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.293547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.293562 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:05Z","lastTransitionTime":"2026-01-26T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.423533 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.423587 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.423597 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.423615 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.423625 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:05Z","lastTransitionTime":"2026-01-26T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.530046 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.530546 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.530561 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.530579 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.530590 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:05Z","lastTransitionTime":"2026-01-26T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.633387 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.633425 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.633433 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.633449 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.633458 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:05Z","lastTransitionTime":"2026-01-26T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.736297 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.736375 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.736389 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.736408 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.736421 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:05Z","lastTransitionTime":"2026-01-26T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.851985 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.852043 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.852054 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.852071 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.852085 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:05Z","lastTransitionTime":"2026-01-26T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.954499 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.954534 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.954545 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.954561 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:05 crc kubenswrapper[5107]: I0126 00:11:05.954570 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:05Z","lastTransitionTime":"2026-01-26T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.055743 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"d886c7e3f5792dfeb1a00971e1427c66f79512445c672d4c87f89153b14984a0"} Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.056097 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.056150 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.056163 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.056180 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.056191 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:06Z","lastTransitionTime":"2026-01-26T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.059216 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerStarted","Data":"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26"} Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.113782 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.113878 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:06 crc kubenswrapper[5107]: E0126 00:11:06.113973 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:06 crc kubenswrapper[5107]: E0126 00:11:06.114006 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.158402 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.158449 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.158460 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.158476 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.158486 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:06Z","lastTransitionTime":"2026-01-26T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.261347 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.261395 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.261404 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.261428 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.261438 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:06Z","lastTransitionTime":"2026-01-26T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.390997 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.391040 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.391048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.391065 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.391073 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:06Z","lastTransitionTime":"2026-01-26T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.493844 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.493913 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.493927 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.493942 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.493952 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:06Z","lastTransitionTime":"2026-01-26T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.612636 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.612707 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.612721 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.612751 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.612774 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:06Z","lastTransitionTime":"2026-01-26T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.715906 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.715974 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.715984 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.716004 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.716017 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:06Z","lastTransitionTime":"2026-01-26T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.818716 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.818791 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.818804 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.818828 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.818841 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:06Z","lastTransitionTime":"2026-01-26T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.921431 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.921487 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.921497 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.921515 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:06 crc kubenswrapper[5107]: I0126 00:11:06.921525 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:06Z","lastTransitionTime":"2026-01-26T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.024929 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.024974 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.024987 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.025003 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.025013 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:07Z","lastTransitionTime":"2026-01-26T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.064660 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerStarted","Data":"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c"} Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.066957 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerStarted","Data":"6badc4c1c50a4607faea2d4519ed051f0aabe9c1d624ba69b08ea6f0e9773e2c"} Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.113198 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:07 crc kubenswrapper[5107]: E0126 00:11:07.113415 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.113708 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:07 crc kubenswrapper[5107]: E0126 00:11:07.113769 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.129029 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.129096 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.129107 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.129132 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.129144 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:07Z","lastTransitionTime":"2026-01-26T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.239531 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.239588 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.239597 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.239612 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.239621 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:07Z","lastTransitionTime":"2026-01-26T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.423547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.423960 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.424043 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.424127 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.424229 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:07Z","lastTransitionTime":"2026-01-26T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.584420 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.584467 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.584480 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.584498 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.584510 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:07Z","lastTransitionTime":"2026-01-26T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.789291 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.789330 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.789339 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.789357 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.789368 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:07Z","lastTransitionTime":"2026-01-26T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.891510 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.891582 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.891597 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.891623 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.891638 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:07Z","lastTransitionTime":"2026-01-26T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.996047 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.996116 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.996174 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.996199 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:07 crc kubenswrapper[5107]: I0126 00:11:07.996218 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:07Z","lastTransitionTime":"2026-01-26T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.090370 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerStarted","Data":"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2"} Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.090418 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerStarted","Data":"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c"} Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.114690 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:08 crc kubenswrapper[5107]: E0126 00:11:08.114831 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.115350 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:08 crc kubenswrapper[5107]: E0126 00:11:08.115438 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.160547 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.160590 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.160599 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.160612 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.160621 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:08Z","lastTransitionTime":"2026-01-26T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.262763 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.262813 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.262827 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.262842 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.262853 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:08Z","lastTransitionTime":"2026-01-26T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.373195 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.373253 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.373267 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.373286 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.373299 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:08Z","lastTransitionTime":"2026-01-26T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.476321 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.476377 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.476387 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.476416 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.476428 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:08Z","lastTransitionTime":"2026-01-26T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.578048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.578085 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.578094 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.578112 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.578121 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:08Z","lastTransitionTime":"2026-01-26T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.686095 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.686146 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.686159 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.686177 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.686193 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:08Z","lastTransitionTime":"2026-01-26T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.861991 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.862090 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.862104 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.862124 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.862505 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:08Z","lastTransitionTime":"2026-01-26T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.964593 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.964656 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.964670 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.964691 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:08 crc kubenswrapper[5107]: I0126 00:11:08.964707 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:08Z","lastTransitionTime":"2026-01-26T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.067156 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.067252 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.067280 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.067311 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.067331 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:09Z","lastTransitionTime":"2026-01-26T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.106660 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerStarted","Data":"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb"} Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.112803 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.112806 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:09 crc kubenswrapper[5107]: E0126 00:11:09.112994 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:09 crc kubenswrapper[5107]: E0126 00:11:09.113090 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.169700 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.169754 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.169764 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.169780 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.169791 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:09Z","lastTransitionTime":"2026-01-26T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.272322 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.272398 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.272413 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.272436 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.272452 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:09Z","lastTransitionTime":"2026-01-26T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.375136 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.375203 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.375214 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.375235 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.375252 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:09Z","lastTransitionTime":"2026-01-26T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.478179 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.478258 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.478275 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.478430 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.478448 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:09Z","lastTransitionTime":"2026-01-26T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.580957 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.581017 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.581031 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.581051 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.581063 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:09Z","lastTransitionTime":"2026-01-26T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.683199 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.683252 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.683264 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.683278 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.683287 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:09Z","lastTransitionTime":"2026-01-26T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.698801 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.698873 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.698927 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.698950 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.698972 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:09Z","lastTransitionTime":"2026-01-26T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:09 crc kubenswrapper[5107]: I0126 00:11:09.854842 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm"] Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.056027 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.061600 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.061680 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.062051 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.062929 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.114138 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:10 crc kubenswrapper[5107]: E0126 00:11:10.114307 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.114599 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:10 crc kubenswrapper[5107]: E0126 00:11:10.114664 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.127796 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/804eb4cd-3394-488d-bed3-674875393f4e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.127876 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/804eb4cd-3394-488d-bed3-674875393f4e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.127946 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/804eb4cd-3394-488d-bed3-674875393f4e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.127965 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/804eb4cd-3394-488d-bed3-674875393f4e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.127981 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/804eb4cd-3394-488d-bed3-674875393f4e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.252808 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/804eb4cd-3394-488d-bed3-674875393f4e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.252854 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/804eb4cd-3394-488d-bed3-674875393f4e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.252874 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/804eb4cd-3394-488d-bed3-674875393f4e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.252963 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/804eb4cd-3394-488d-bed3-674875393f4e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.252990 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/804eb4cd-3394-488d-bed3-674875393f4e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.253960 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/804eb4cd-3394-488d-bed3-674875393f4e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.254022 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/804eb4cd-3394-488d-bed3-674875393f4e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.268240 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/804eb4cd-3394-488d-bed3-674875393f4e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.268801 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/804eb4cd-3394-488d-bed3-674875393f4e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.268842 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.323017 5107 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.379112 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/804eb4cd-3394-488d-bed3-674875393f4e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-wjpsm\" (UID: \"804eb4cd-3394-488d-bed3-674875393f4e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:10 crc kubenswrapper[5107]: I0126 00:11:10.379338 5107 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" Jan 26 00:11:11 crc kubenswrapper[5107]: I0126 00:11:11.113248 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:11 crc kubenswrapper[5107]: I0126 00:11:11.113256 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:11 crc kubenswrapper[5107]: E0126 00:11:11.113444 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:11 crc kubenswrapper[5107]: E0126 00:11:11.113542 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:11 crc kubenswrapper[5107]: I0126 00:11:11.127738 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerStarted","Data":"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db"} Jan 26 00:11:11 crc kubenswrapper[5107]: I0126 00:11:11.129430 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" event={"ID":"804eb4cd-3394-488d-bed3-674875393f4e","Type":"ContainerStarted","Data":"58cd02f75eb5edcf7f56a4978621c57caa864e095f68c0ef685027fe95651bf9"} Jan 26 00:11:11 crc kubenswrapper[5107]: I0126 00:11:11.129473 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" event={"ID":"804eb4cd-3394-488d-bed3-674875393f4e","Type":"ContainerStarted","Data":"50952bf52fbf187c8a4742b74597d40f00965b76a685fe81c6353b4c9fdc04cf"} Jan 26 00:11:11 crc kubenswrapper[5107]: I0126 00:11:11.157137 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-wjpsm" podStartSLOduration=91.157118046 podStartE2EDuration="1m31.157118046s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:11.155493901 +0000 UTC m=+116.073088247" watchObservedRunningTime="2026-01-26 00:11:11.157118046 +0000 UTC m=+116.074712392" Jan 26 00:11:12 crc kubenswrapper[5107]: I0126 00:11:12.114647 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:12 crc kubenswrapper[5107]: E0126 00:11:12.115065 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:12 crc kubenswrapper[5107]: I0126 00:11:12.115224 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:12 crc kubenswrapper[5107]: E0126 00:11:12.115274 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:13 crc kubenswrapper[5107]: I0126 00:11:13.112873 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:13 crc kubenswrapper[5107]: I0126 00:11:13.112873 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:13 crc kubenswrapper[5107]: E0126 00:11:13.113112 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:13 crc kubenswrapper[5107]: E0126 00:11:13.113253 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:14 crc kubenswrapper[5107]: I0126 00:11:14.145054 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:14 crc kubenswrapper[5107]: E0126 00:11:14.145176 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:14 crc kubenswrapper[5107]: I0126 00:11:14.145354 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:14 crc kubenswrapper[5107]: E0126 00:11:14.145473 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:14 crc kubenswrapper[5107]: I0126 00:11:14.151821 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerStarted","Data":"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4"} Jan 26 00:11:14 crc kubenswrapper[5107]: I0126 00:11:14.153425 5107 generic.go:358] "Generic (PLEG): container finished" podID="65e3191d-a6c4-4983-aa24-9f03af38c82b" containerID="6badc4c1c50a4607faea2d4519ed051f0aabe9c1d624ba69b08ea6f0e9773e2c" exitCode=0 Jan 26 00:11:14 crc kubenswrapper[5107]: I0126 00:11:14.153467 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerDied","Data":"6badc4c1c50a4607faea2d4519ed051f0aabe9c1d624ba69b08ea6f0e9773e2c"} Jan 26 00:11:15 crc kubenswrapper[5107]: I0126 00:11:15.113051 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:15 crc kubenswrapper[5107]: I0126 00:11:15.113071 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:15 crc kubenswrapper[5107]: E0126 00:11:15.113586 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:15 crc kubenswrapper[5107]: E0126 00:11:15.114030 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:16 crc kubenswrapper[5107]: E0126 00:11:16.081778 5107 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Jan 26 00:11:16 crc kubenswrapper[5107]: I0126 00:11:16.115320 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:16 crc kubenswrapper[5107]: I0126 00:11:16.115390 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:16 crc kubenswrapper[5107]: E0126 00:11:16.115574 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:16 crc kubenswrapper[5107]: E0126 00:11:16.115922 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:16 crc kubenswrapper[5107]: E0126 00:11:16.800500 5107 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 00:11:17 crc kubenswrapper[5107]: I0126 00:11:17.112554 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:17 crc kubenswrapper[5107]: E0126 00:11:17.112733 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:17 crc kubenswrapper[5107]: I0126 00:11:17.113557 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:17 crc kubenswrapper[5107]: E0126 00:11:17.113793 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:17 crc kubenswrapper[5107]: I0126 00:11:17.350649 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerStarted","Data":"86995b5724a0082d6ca22de3478938ed26423c6b45e282ff76f91ab94f30432a"} Jan 26 00:11:18 crc kubenswrapper[5107]: I0126 00:11:18.112499 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:18 crc kubenswrapper[5107]: I0126 00:11:18.112565 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:18 crc kubenswrapper[5107]: E0126 00:11:18.112841 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:18 crc kubenswrapper[5107]: E0126 00:11:18.113015 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:18 crc kubenswrapper[5107]: I0126 00:11:18.355504 5107 generic.go:358] "Generic (PLEG): container finished" podID="65e3191d-a6c4-4983-aa24-9f03af38c82b" containerID="86995b5724a0082d6ca22de3478938ed26423c6b45e282ff76f91ab94f30432a" exitCode=0 Jan 26 00:11:18 crc kubenswrapper[5107]: I0126 00:11:18.355573 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerDied","Data":"86995b5724a0082d6ca22de3478938ed26423c6b45e282ff76f91ab94f30432a"} Jan 26 00:11:19 crc kubenswrapper[5107]: I0126 00:11:19.112300 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:19 crc kubenswrapper[5107]: E0126 00:11:19.112458 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:19 crc kubenswrapper[5107]: I0126 00:11:19.112631 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:19 crc kubenswrapper[5107]: E0126 00:11:19.112678 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:19 crc kubenswrapper[5107]: I0126 00:11:19.363194 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerStarted","Data":"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5"} Jan 26 00:11:19 crc kubenswrapper[5107]: I0126 00:11:19.363830 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:11:19 crc kubenswrapper[5107]: I0126 00:11:19.363875 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:11:19 crc kubenswrapper[5107]: I0126 00:11:19.363979 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:11:19 crc kubenswrapper[5107]: I0126 00:11:19.399662 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podStartSLOduration=98.399642956 podStartE2EDuration="1m38.399642956s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:19.397047374 +0000 UTC m=+124.314641730" watchObservedRunningTime="2026-01-26 00:11:19.399642956 +0000 UTC m=+124.317237302" Jan 26 00:11:19 crc kubenswrapper[5107]: I0126 00:11:19.447612 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:11:19 crc kubenswrapper[5107]: I0126 00:11:19.488037 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:11:20 crc kubenswrapper[5107]: I0126 00:11:20.112462 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:20 crc kubenswrapper[5107]: I0126 00:11:20.112489 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:20 crc kubenswrapper[5107]: E0126 00:11:20.112684 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:20 crc kubenswrapper[5107]: E0126 00:11:20.112900 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:20 crc kubenswrapper[5107]: I0126 00:11:20.385875 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerStarted","Data":"75abf1bd5ba606570e9cf4be3b50bca5baaf2ed20d81a2d76894722d327823df"} Jan 26 00:11:21 crc kubenswrapper[5107]: I0126 00:11:21.182204 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:21 crc kubenswrapper[5107]: E0126 00:11:21.182389 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:21 crc kubenswrapper[5107]: I0126 00:11:21.182654 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:21 crc kubenswrapper[5107]: E0126 00:11:21.182726 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:21 crc kubenswrapper[5107]: I0126 00:11:21.393278 5107 generic.go:358] "Generic (PLEG): container finished" podID="65e3191d-a6c4-4983-aa24-9f03af38c82b" containerID="75abf1bd5ba606570e9cf4be3b50bca5baaf2ed20d81a2d76894722d327823df" exitCode=0 Jan 26 00:11:21 crc kubenswrapper[5107]: I0126 00:11:21.395119 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerDied","Data":"75abf1bd5ba606570e9cf4be3b50bca5baaf2ed20d81a2d76894722d327823df"} Jan 26 00:11:21 crc kubenswrapper[5107]: E0126 00:11:21.801777 5107 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 00:11:22 crc kubenswrapper[5107]: I0126 00:11:22.230923 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:22 crc kubenswrapper[5107]: E0126 00:11:22.231123 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:22 crc kubenswrapper[5107]: I0126 00:11:22.231366 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:22 crc kubenswrapper[5107]: E0126 00:11:22.231461 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:22 crc kubenswrapper[5107]: I0126 00:11:22.408642 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerStarted","Data":"c079709ead5060371c460813ff11fc405cb3ac57a49f784732d608d813c42b0c"} Jan 26 00:11:23 crc kubenswrapper[5107]: I0126 00:11:23.112327 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:23 crc kubenswrapper[5107]: E0126 00:11:23.112905 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:23 crc kubenswrapper[5107]: I0126 00:11:23.112560 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:23 crc kubenswrapper[5107]: E0126 00:11:23.113243 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:24 crc kubenswrapper[5107]: I0126 00:11:24.112577 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:24 crc kubenswrapper[5107]: I0126 00:11:24.112813 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:24 crc kubenswrapper[5107]: E0126 00:11:24.113059 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:24 crc kubenswrapper[5107]: E0126 00:11:24.113370 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:25 crc kubenswrapper[5107]: I0126 00:11:25.113432 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:25 crc kubenswrapper[5107]: E0126 00:11:25.113676 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:25 crc kubenswrapper[5107]: I0126 00:11:25.119172 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:25 crc kubenswrapper[5107]: E0126 00:11:25.120100 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:26 crc kubenswrapper[5107]: I0126 00:11:26.117555 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:26 crc kubenswrapper[5107]: E0126 00:11:26.117715 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:26 crc kubenswrapper[5107]: I0126 00:11:26.118030 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:26 crc kubenswrapper[5107]: E0126 00:11:26.118107 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:26 crc kubenswrapper[5107]: I0126 00:11:26.479413 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bdn4m"] Jan 26 00:11:26 crc kubenswrapper[5107]: I0126 00:11:26.480006 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:26 crc kubenswrapper[5107]: E0126 00:11:26.480232 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:26 crc kubenswrapper[5107]: E0126 00:11:26.803455 5107 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 00:11:27 crc kubenswrapper[5107]: I0126 00:11:27.112854 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:27 crc kubenswrapper[5107]: I0126 00:11:27.113084 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:27 crc kubenswrapper[5107]: E0126 00:11:27.113453 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:27 crc kubenswrapper[5107]: E0126 00:11:27.113553 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:28 crc kubenswrapper[5107]: I0126 00:11:28.114101 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:28 crc kubenswrapper[5107]: I0126 00:11:28.114122 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:28 crc kubenswrapper[5107]: E0126 00:11:28.114265 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:28 crc kubenswrapper[5107]: E0126 00:11:28.114390 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:29 crc kubenswrapper[5107]: I0126 00:11:29.112461 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:29 crc kubenswrapper[5107]: E0126 00:11:29.112663 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:29 crc kubenswrapper[5107]: I0126 00:11:29.113027 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:29 crc kubenswrapper[5107]: E0126 00:11:29.113095 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:29 crc kubenswrapper[5107]: I0126 00:11:29.537754 5107 generic.go:358] "Generic (PLEG): container finished" podID="65e3191d-a6c4-4983-aa24-9f03af38c82b" containerID="c079709ead5060371c460813ff11fc405cb3ac57a49f784732d608d813c42b0c" exitCode=0 Jan 26 00:11:29 crc kubenswrapper[5107]: I0126 00:11:29.537831 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerDied","Data":"c079709ead5060371c460813ff11fc405cb3ac57a49f784732d608d813c42b0c"} Jan 26 00:11:30 crc kubenswrapper[5107]: I0126 00:11:30.113056 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:30 crc kubenswrapper[5107]: E0126 00:11:30.113218 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:30 crc kubenswrapper[5107]: I0126 00:11:30.113392 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:30 crc kubenswrapper[5107]: E0126 00:11:30.113639 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:30 crc kubenswrapper[5107]: I0126 00:11:30.550409 5107 generic.go:358] "Generic (PLEG): container finished" podID="65e3191d-a6c4-4983-aa24-9f03af38c82b" containerID="dcd502c12b88b80b442d0fd1e15cf72110466facb7d6c259e073df40fcaa055e" exitCode=0 Jan 26 00:11:30 crc kubenswrapper[5107]: I0126 00:11:30.550476 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerDied","Data":"dcd502c12b88b80b442d0fd1e15cf72110466facb7d6c259e073df40fcaa055e"} Jan 26 00:11:31 crc kubenswrapper[5107]: I0126 00:11:31.112533 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:31 crc kubenswrapper[5107]: I0126 00:11:31.112637 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:31 crc kubenswrapper[5107]: E0126 00:11:31.112980 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:31 crc kubenswrapper[5107]: E0126 00:11:31.113101 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:31 crc kubenswrapper[5107]: I0126 00:11:31.566397 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4vppd" event={"ID":"65e3191d-a6c4-4983-aa24-9f03af38c82b","Type":"ContainerStarted","Data":"ef5bfb0d49a6c31a857d11b7a7aeef76b62c5be17e5d7bcca7b2f073159c5f1c"} Jan 26 00:11:31 crc kubenswrapper[5107]: I0126 00:11:31.594928 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-4vppd" podStartSLOduration=110.594909701 podStartE2EDuration="1m50.594909701s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:31.593069469 +0000 UTC m=+136.510663835" watchObservedRunningTime="2026-01-26 00:11:31.594909701 +0000 UTC m=+136.512504047" Jan 26 00:11:31 crc kubenswrapper[5107]: E0126 00:11:31.817158 5107 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 00:11:32 crc kubenswrapper[5107]: I0126 00:11:32.112401 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:32 crc kubenswrapper[5107]: E0126 00:11:32.112560 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:32 crc kubenswrapper[5107]: I0126 00:11:32.112870 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:32 crc kubenswrapper[5107]: E0126 00:11:32.112980 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:33 crc kubenswrapper[5107]: I0126 00:11:33.112330 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.112570 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:33 crc kubenswrapper[5107]: I0126 00:11:33.112850 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.113075 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:33 crc kubenswrapper[5107]: I0126 00:11:33.163152 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:33 crc kubenswrapper[5107]: I0126 00:11:33.163246 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.163343 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.163480 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:12:37.163452778 +0000 UTC m=+202.081047124 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.163582 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.163738 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:12:37.163711045 +0000 UTC m=+202.081305391 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:33 crc kubenswrapper[5107]: I0126 00:11:33.264848 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:33 crc kubenswrapper[5107]: I0126 00:11:33.265062 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.265160 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.265232 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.265252 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.265269 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.265276 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.265290 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.265394 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:12:37.265344621 +0000 UTC m=+202.182938967 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:33 crc kubenswrapper[5107]: E0126 00:11:33.265419 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:12:37.265412163 +0000 UTC m=+202.183006509 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:34 crc kubenswrapper[5107]: I0126 00:11:34.112997 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:34 crc kubenswrapper[5107]: I0126 00:11:34.113133 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:34 crc kubenswrapper[5107]: E0126 00:11:34.113837 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:34 crc kubenswrapper[5107]: E0126 00:11:34.114042 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:34 crc kubenswrapper[5107]: I0126 00:11:34.177611 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:34 crc kubenswrapper[5107]: E0126 00:11:34.177832 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:34 crc kubenswrapper[5107]: E0126 00:11:34.177932 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs podName:93b5402e-3f3e-4e3b-8cf4-f919871d0c86 nodeName:}" failed. No retries permitted until 2026-01-26 00:12:38.177914464 +0000 UTC m=+203.095508810 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs") pod "network-metrics-daemon-bdn4m" (UID: "93b5402e-3f3e-4e3b-8cf4-f919871d0c86") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:34 crc kubenswrapper[5107]: I0126 00:11:34.380578 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:34 crc kubenswrapper[5107]: E0126 00:11:34.380818 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:38.380796115 +0000 UTC m=+203.298390471 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:35 crc kubenswrapper[5107]: I0126 00:11:35.112619 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:35 crc kubenswrapper[5107]: I0126 00:11:35.112628 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:35 crc kubenswrapper[5107]: E0126 00:11:35.112821 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:35 crc kubenswrapper[5107]: E0126 00:11:35.112956 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:36 crc kubenswrapper[5107]: I0126 00:11:36.114541 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:36 crc kubenswrapper[5107]: E0126 00:11:36.114665 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:36 crc kubenswrapper[5107]: I0126 00:11:36.114970 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:36 crc kubenswrapper[5107]: E0126 00:11:36.115386 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bdn4m" podUID="93b5402e-3f3e-4e3b-8cf4-f919871d0c86" Jan 26 00:11:37 crc kubenswrapper[5107]: I0126 00:11:37.112863 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:37 crc kubenswrapper[5107]: I0126 00:11:37.113024 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:37 crc kubenswrapper[5107]: I0126 00:11:37.115377 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 26 00:11:37 crc kubenswrapper[5107]: I0126 00:11:37.115533 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 26 00:11:38 crc kubenswrapper[5107]: I0126 00:11:38.112182 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:11:38 crc kubenswrapper[5107]: I0126 00:11:38.113244 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:38 crc kubenswrapper[5107]: I0126 00:11:38.115136 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 26 00:11:38 crc kubenswrapper[5107]: I0126 00:11:38.115177 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 26 00:11:38 crc kubenswrapper[5107]: I0126 00:11:38.115458 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 26 00:11:38 crc kubenswrapper[5107]: I0126 00:11:38.118451 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 26 00:11:40 crc kubenswrapper[5107]: I0126 00:11:40.088312 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 26 00:11:40 crc kubenswrapper[5107]: I0126 00:11:40.124270 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.048950 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-4gmk9"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.049146 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.052407 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.052485 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.052551 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.052593 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.053218 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.053277 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.053354 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.054256 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-lpd5s"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.054326 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.056139 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.056691 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.057277 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.057440 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.058071 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.058288 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.058541 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.058686 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.058908 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.059361 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.059671 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.059769 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.059816 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.059869 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.063524 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.063698 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.064043 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.064361 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.073098 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.076686 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29489760-jn9bq"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.076726 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.078686 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.078851 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.079382 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.079437 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.079517 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.079631 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.140203 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-jn9bq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.143324 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.144272 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.146444 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-mjn4v"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.162807 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-wsw2x"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.163025 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.165271 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-zmswq"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.165363 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.169367 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.169792 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.170430 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.170686 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.170755 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.170951 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.174843 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175143 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175476 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175473 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175516 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f147b0c8-28b8-4818-a30c-f6aa0da709db-images\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175545 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/add7b84d-7f90-4850-9568-c7f3755404ca-audit-dir\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175560 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f147b0c8-28b8-4818-a30c-f6aa0da709db-config\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175592 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/add7b84d-7f90-4850-9568-c7f3755404ca-trusted-ca-bundle\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175724 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-console-serving-cert\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175755 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a0a54b9-6415-4f43-97d3-1b95793389ec-auth-proxy-config\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175791 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/add7b84d-7f90-4850-9568-c7f3755404ca-audit-policies\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175815 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98r8x\" (UniqueName: \"kubernetes.io/projected/1a0a54b9-6415-4f43-97d3-1b95793389ec-kube-api-access-98r8x\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175912 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/add7b84d-7f90-4850-9568-c7f3755404ca-encryption-config\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175956 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-oauth-serving-cert\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.175988 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-trusted-ca-bundle\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176018 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-service-ca\") pod 
\"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176054 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add7b84d-7f90-4850-9568-c7f3755404ca-serving-cert\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176086 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f147b0c8-28b8-4818-a30c-f6aa0da709db-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176159 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-console-oauth-config\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176186 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvkwc\" (UniqueName: \"kubernetes.io/projected/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-kube-api-access-gvkwc\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176210 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/add7b84d-7f90-4850-9568-c7f3755404ca-etcd-client\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176254 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0a54b9-6415-4f43-97d3-1b95793389ec-config\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176270 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/add7b84d-7f90-4850-9568-c7f3755404ca-etcd-serving-ca\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176284 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1a0a54b9-6415-4f43-97d3-1b95793389ec-machine-approver-tls\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 
00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176307 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-console-config\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176376 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4g9h\" (UniqueName: \"kubernetes.io/projected/f147b0c8-28b8-4818-a30c-f6aa0da709db-kube-api-access-g4g9h\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176409 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhbp7\" (UniqueName: \"kubernetes.io/projected/add7b84d-7f90-4850-9568-c7f3755404ca-kube-api-access-zhbp7\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.176487 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.177625 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.178272 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.178337 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.178948 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.178876 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.181206 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.181510 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.181555 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.181720 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.182016 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-flbvs"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 
00:11:42.182157 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.182855 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.183025 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.184533 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.184876 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.191678 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-64rgr"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.191834 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.192814 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.193178 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.194415 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.194809 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.194825 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.195044 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.195280 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.195315 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-64rgr" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.195355 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.198061 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.198118 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.198279 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.198351 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.198418 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.198708 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.198928 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.198988 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.199043 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.199447 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.199523 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.202257 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.203202 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.203393 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.205965 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.206728 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.206746 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.207245 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.207325 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.207523 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.210590 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.213258 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.226683 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.226806 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.226935 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.228171 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.228510 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.229276 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.229326 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.229357 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.229282 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.230078 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.238132 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.238360 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.238626 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.242497 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.242681 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.243408 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.247412 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.249443 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.249636 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.251339 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.263619 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.263854 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.267208 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.267358 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.272345 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.272503 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.273707 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.276166 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.276241 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.276868 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/add7b84d-7f90-4850-9568-c7f3755404ca-etcd-serving-ca\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.276916 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1a0a54b9-6415-4f43-97d3-1b95793389ec-machine-approver-tls\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.276940 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b18dee05-6423-4857-95c5-63d2a976e19f-audit-dir\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.276960 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44ede31-5627-4422-b319-14db754817f4-config\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277082 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-console-config\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277108 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-config\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277124 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqps7\" (UniqueName: \"kubernetes.io/projected/cecc62a2-1a5f-4b0f-95bf-459d1493d1df-kube-api-access-bqps7\") pod \"openshift-config-operator-5777786469-zmswq\" (UID: \"cecc62a2-1a5f-4b0f-95bf-459d1493d1df\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277151 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g4g9h\" (UniqueName: \"kubernetes.io/projected/f147b0c8-28b8-4818-a30c-f6aa0da709db-kube-api-access-g4g9h\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277191 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277299 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-serviceca\") pod \"image-pruner-29489760-jn9bq\" (UID: \"42d6fb86-e6fd-4b77-b921-d62cd5b6e825\") " pod="openshift-image-registry/image-pruner-29489760-jn9bq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277440 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-config\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277491 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhbp7\" (UniqueName: \"kubernetes.io/projected/add7b84d-7f90-4850-9568-c7f3755404ca-kube-api-access-zhbp7\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277518 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277541 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f147b0c8-28b8-4818-a30c-f6aa0da709db-images\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277626 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecc62a2-1a5f-4b0f-95bf-459d1493d1df-serving-cert\") pod \"openshift-config-operator-5777786469-zmswq\" (UID: \"cecc62a2-1a5f-4b0f-95bf-459d1493d1df\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277691 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b18dee05-6423-4857-95c5-63d2a976e19f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277713 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/936adeed-5876-49da-b102-8187f5bc998a-config\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277734 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-policies\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277787 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/add7b84d-7f90-4850-9568-c7f3755404ca-audit-dir\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277807 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277827 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f147b0c8-28b8-4818-a30c-f6aa0da709db-config\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.277839 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/add7b84d-7f90-4850-9568-c7f3755404ca-audit-dir\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278218 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278248 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/add7b84d-7f90-4850-9568-c7f3755404ca-trusted-ca-bundle\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278300 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zdjl\" (UniqueName: \"kubernetes.io/projected/c1eb51c7-ee2f-4230-929d-62d6608eca89-kube-api-access-5zdjl\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: 
\"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278322 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-client-ca\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278394 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-console-serving-cert\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278413 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a0a54b9-6415-4f43-97d3-1b95793389ec-auth-proxy-config\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278467 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b18dee05-6423-4857-95c5-63d2a976e19f-etcd-client\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278508 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/936adeed-5876-49da-b102-8187f5bc998a-serving-cert\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278555 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f147b0c8-28b8-4818-a30c-f6aa0da709db-images\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278580 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b18dee05-6423-4857-95c5-63d2a976e19f-serving-cert\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278637 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 
00:11:42.278677 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278726 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278754 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/add7b84d-7f90-4850-9568-c7f3755404ca-audit-policies\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.278802 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-98r8x\" (UniqueName: \"kubernetes.io/projected/1a0a54b9-6415-4f43-97d3-1b95793389ec-kube-api-access-98r8x\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279007 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-image-import-ca\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279032 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/add7b84d-7f90-4850-9568-c7f3755404ca-trusted-ca-bundle\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279049 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ba61487-45ca-44b7-aaed-0faa630aaa88-config\") pod \"openshift-apiserver-operator-846cbfc458-wwqgc\" (UID: \"6ba61487-45ca-44b7-aaed-0faa630aaa88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279123 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-serving-cert\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279185 5107 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/add7b84d-7f90-4850-9568-c7f3755404ca-encryption-config\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279200 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f147b0c8-28b8-4818-a30c-f6aa0da709db-config\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279215 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-audit\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279239 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb5hj\" (UniqueName: \"kubernetes.io/projected/b44ede31-5627-4422-b319-14db754817f4-kube-api-access-pb5hj\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279311 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-oauth-serving-cert\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279336 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-trusted-ca-bundle\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279358 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2dh\" (UniqueName: \"kubernetes.io/projected/4498876a-5953-499f-aa71-6899b8529dcf-kube-api-access-zz2dh\") pod \"downloads-747b44746d-64rgr\" (UID: \"4498876a-5953-499f-aa71-6899b8529dcf\") " pod="openshift-console/downloads-747b44746d-64rgr" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279384 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279407 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279429 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvlxh\" (UniqueName: \"kubernetes.io/projected/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-kube-api-access-bvlxh\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279454 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-service-ca\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279457 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/add7b84d-7f90-4850-9568-c7f3755404ca-audit-policies\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279477 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/936adeed-5876-49da-b102-8187f5bc998a-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279508 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add7b84d-7f90-4850-9568-c7f3755404ca-serving-cert\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279537 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f147b0c8-28b8-4818-a30c-f6aa0da709db-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279559 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279583 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b18dee05-6423-4857-95c5-63d2a976e19f-encryption-config\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc 
kubenswrapper[5107]: I0126 00:11:42.279594 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279608 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hgwf\" (UniqueName: \"kubernetes.io/projected/b18dee05-6423-4857-95c5-63d2a976e19f-kube-api-access-7hgwf\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279635 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279658 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ba61487-45ca-44b7-aaed-0faa630aaa88-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-wwqgc\" (UID: \"6ba61487-45ca-44b7-aaed-0faa630aaa88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279685 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279711 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26dbv\" (UniqueName: \"kubernetes.io/projected/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-kube-api-access-26dbv\") pod \"image-pruner-29489760-jn9bq\" (UID: \"42d6fb86-e6fd-4b77-b921-d62cd5b6e825\") " pod="openshift-image-registry/image-pruner-29489760-jn9bq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279731 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-tmp\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279755 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b44ede31-5627-4422-b319-14db754817f4-serving-cert\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279796 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-dir\") pod 
\"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279829 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-console-oauth-config\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279857 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gvkwc\" (UniqueName: \"kubernetes.io/projected/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-kube-api-access-gvkwc\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279896 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/936adeed-5876-49da-b102-8187f5bc998a-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279929 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgmm2\" (UniqueName: \"kubernetes.io/projected/6ba61487-45ca-44b7-aaed-0faa630aaa88-kube-api-access-mgmm2\") pod \"openshift-apiserver-operator-846cbfc458-wwqgc\" (UID: \"6ba61487-45ca-44b7-aaed-0faa630aaa88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279951 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b44ede31-5627-4422-b319-14db754817f4-trusted-ca\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.279978 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cecc62a2-1a5f-4b0f-95bf-459d1493d1df-available-featuregates\") pod \"openshift-config-operator-5777786469-zmswq\" (UID: \"cecc62a2-1a5f-4b0f-95bf-459d1493d1df\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.280005 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/add7b84d-7f90-4850-9568-c7f3755404ca-etcd-client\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.280045 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0a54b9-6415-4f43-97d3-1b95793389ec-config\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " 
pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.280067 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkg29\" (UniqueName: \"kubernetes.io/projected/936adeed-5876-49da-b102-8187f5bc998a-kube-api-access-qkg29\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.280092 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.280136 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-service-ca\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.280572 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.280582 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-oauth-serving-cert\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.281088 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/add7b84d-7f90-4850-9568-c7f3755404ca-etcd-serving-ca\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.282008 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-console-config\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.282937 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-trusted-ca-bundle\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.283069 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a0a54b9-6415-4f43-97d3-1b95793389ec-auth-proxy-config\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " 
pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.284908 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add7b84d-7f90-4850-9568-c7f3755404ca-serving-cert\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.284955 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/add7b84d-7f90-4850-9568-c7f3755404ca-encryption-config\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.284985 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-console-oauth-config\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.285380 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f147b0c8-28b8-4818-a30c-f6aa0da709db-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.286384 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-console-serving-cert\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.286478 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/add7b84d-7f90-4850-9568-c7f3755404ca-etcd-client\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.291739 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.295381 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1a0a54b9-6415-4f43-97d3-1b95793389ec-machine-approver-tls\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.296903 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0a54b9-6415-4f43-97d3-1b95793389ec-config\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.311704 5107 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.314044 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-59jn5"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.314190 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.319716 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.319826 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.322527 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.322713 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.330173 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-mbr9b"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.330336 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.332634 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.334818 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.334973 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.352597 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.361334 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-5hcgj"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.361676 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.364545 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xqx9c"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.364659 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.374773 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-bf6bf"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.375253 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.381706 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.381754 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0470d1dc-849c-40d7-9a25-efb425c4e111-serving-cert\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.381798 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qzl7\" (UniqueName: \"kubernetes.io/projected/d93df320-4284-49f0-b63d-ba8a86943f2e-kube-api-access-7qzl7\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.381825 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-config\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.381843 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bqps7\" (UniqueName: \"kubernetes.io/projected/cecc62a2-1a5f-4b0f-95bf-459d1493d1df-kube-api-access-bqps7\") pod \"openshift-config-operator-5777786469-zmswq\" (UID: \"cecc62a2-1a5f-4b0f-95bf-459d1493d1df\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.381915 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.381938 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-serviceca\") pod \"image-pruner-29489760-jn9bq\" (UID: \"42d6fb86-e6fd-4b77-b921-d62cd5b6e825\") " pod="openshift-image-registry/image-pruner-29489760-jn9bq" Jan 26 00:11:42 crc 
kubenswrapper[5107]: I0126 00:11:42.381957 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0470d1dc-849c-40d7-9a25-efb425c4e111-config\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.381998 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/926c0a09-eb65-428f-9fd5-9c7c6c80799d-tmpfs\") pod \"olm-operator-5cdf44d969-gg5st\" (UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382020 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-config\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382061 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382082 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d93df320-4284-49f0-b63d-ba8a86943f2e-tmp\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382102 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecc62a2-1a5f-4b0f-95bf-459d1493d1df-serving-cert\") pod \"openshift-config-operator-5777786469-zmswq\" (UID: \"cecc62a2-1a5f-4b0f-95bf-459d1493d1df\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382139 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/873d11a3-8ce7-483a-9496-18ce7ddc339c-images\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382163 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b18dee05-6423-4857-95c5-63d2a976e19f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382183 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936adeed-5876-49da-b102-8187f5bc998a-config\") pod 
\"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382221 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-policies\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382243 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382260 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ff95d2f-84b0-4ead-ab7d-65268a250ede-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382276 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7d5497-9496-4ba6-8f07-95f5d955d403-serving-cert\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382313 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382333 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086c90e6-e51d-42dc-be10-5df7ebaa5e16-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382351 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c023721-040d-42ad-b8f7-6c190a17f193-config\") pod \"kube-storage-version-migrator-operator-565b79b866-gjkxw\" (UID: \"2c023721-040d-42ad-b8f7-6c190a17f193\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382389 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382418 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086c90e6-e51d-42dc-be10-5df7ebaa5e16-config\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382434 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0470d1dc-849c-40d7-9a25-efb425c4e111-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382475 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24225\" (UniqueName: \"kubernetes.io/projected/0470d1dc-849c-40d7-9a25-efb425c4e111-kube-api-access-24225\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382498 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382597 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382656 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5zdjl\" (UniqueName: \"kubernetes.io/projected/c1eb51c7-ee2f-4230-929d-62d6608eca89-kube-api-access-5zdjl\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382673 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-client-ca\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382769 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382803 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382827 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0470d1dc-849c-40d7-9a25-efb425c4e111-etcd-client\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382845 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a708af6-a88c-47e1-85cf-8512edab0a65-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382863 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/873d11a3-8ce7-483a-9496-18ce7ddc339c-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382913 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b18dee05-6423-4857-95c5-63d2a976e19f-etcd-client\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382939 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/936adeed-5876-49da-b102-8187f5bc998a-serving-cert\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382961 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b18dee05-6423-4857-95c5-63d2a976e19f-serving-cert\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382978 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.382996 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383015 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383035 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5ld\" (UniqueName: \"kubernetes.io/projected/2c023721-040d-42ad-b8f7-6c190a17f193-kube-api-access-5j5ld\") pod \"kube-storage-version-migrator-operator-565b79b866-gjkxw\" (UID: \"2c023721-040d-42ad-b8f7-6c190a17f193\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383053 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/926c0a09-eb65-428f-9fd5-9c7c6c80799d-profile-collector-cert\") pod \"olm-operator-5cdf44d969-gg5st\" (UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383062 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-config\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383073 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/086c90e6-e51d-42dc-be10-5df7ebaa5e16-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383151 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0470d1dc-849c-40d7-9a25-efb425c4e111-tmp-dir\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383180 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a708af6-a88c-47e1-85cf-8512edab0a65-config\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383205 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1a708af6-a88c-47e1-85cf-8512edab0a65-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383246 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-image-import-ca\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383272 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ba61487-45ca-44b7-aaed-0faa630aaa88-config\") pod \"openshift-apiserver-operator-846cbfc458-wwqgc\" (UID: \"6ba61487-45ca-44b7-aaed-0faa630aaa88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383299 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-serving-cert\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383370 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-audit\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383399 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pb5hj\" (UniqueName: \"kubernetes.io/projected/b44ede31-5627-4422-b319-14db754817f4-kube-api-access-pb5hj\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383422 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0470d1dc-849c-40d7-9a25-efb425c4e111-etcd-ca\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383449 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/926c0a09-eb65-428f-9fd5-9c7c6c80799d-srv-cert\") pod \"olm-operator-5cdf44d969-gg5st\" 
(UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383477 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lllt8\" (UniqueName: \"kubernetes.io/projected/086c90e6-e51d-42dc-be10-5df7ebaa5e16-kube-api-access-lllt8\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383506 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tvkq\" (UniqueName: \"kubernetes.io/projected/7ff95d2f-84b0-4ead-ab7d-65268a250ede-kube-api-access-7tvkq\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383530 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c023721-040d-42ad-b8f7-6c190a17f193-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-gjkxw\" (UID: \"2c023721-040d-42ad-b8f7-6c190a17f193\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383559 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zz2dh\" (UniqueName: \"kubernetes.io/projected/4498876a-5953-499f-aa71-6899b8529dcf-kube-api-access-zz2dh\") pod \"downloads-747b44746d-64rgr\" (UID: \"4498876a-5953-499f-aa71-6899b8529dcf\") " pod="openshift-console/downloads-747b44746d-64rgr" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383587 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383614 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383652 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bvlxh\" (UniqueName: \"kubernetes.io/projected/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-kube-api-access-bvlxh\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383681 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383712 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwq45\" (UniqueName: \"kubernetes.io/projected/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-kube-api-access-vwq45\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383739 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ff95d2f-84b0-4ead-ab7d-65268a250ede-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383907 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/936adeed-5876-49da-b102-8187f5bc998a-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.383937 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7d5497-9496-4ba6-8f07-95f5d955d403-config\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384118 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384159 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b18dee05-6423-4857-95c5-63d2a976e19f-encryption-config\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384180 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7hgwf\" (UniqueName: \"kubernetes.io/projected/b18dee05-6423-4857-95c5-63d2a976e19f-kube-api-access-7hgwf\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384202 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-etcd-serving-ca\") pod 
\"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384224 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ba61487-45ca-44b7-aaed-0faa630aaa88-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-wwqgc\" (UID: \"6ba61487-45ca-44b7-aaed-0faa630aaa88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384245 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384269 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26dbv\" (UniqueName: \"kubernetes.io/projected/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-kube-api-access-26dbv\") pod \"image-pruner-29489760-jn9bq\" (UID: \"42d6fb86-e6fd-4b77-b921-d62cd5b6e825\") " pod="openshift-image-registry/image-pruner-29489760-jn9bq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384287 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-tmp\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384304 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b44ede31-5627-4422-b319-14db754817f4-serving-cert\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384340 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-dir\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384360 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c7d5497-9496-4ba6-8f07-95f5d955d403-tmp-dir\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384395 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/936adeed-5876-49da-b102-8187f5bc998a-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384419 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mgmm2\" (UniqueName: \"kubernetes.io/projected/6ba61487-45ca-44b7-aaed-0faa630aaa88-kube-api-access-mgmm2\") pod \"openshift-apiserver-operator-846cbfc458-wwqgc\" (UID: \"6ba61487-45ca-44b7-aaed-0faa630aaa88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384435 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b44ede31-5627-4422-b319-14db754817f4-trusted-ca\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384454 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cecc62a2-1a5f-4b0f-95bf-459d1493d1df-available-featuregates\") pod \"openshift-config-operator-5777786469-zmswq\" (UID: \"cecc62a2-1a5f-4b0f-95bf-459d1493d1df\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384500 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qkg29\" (UniqueName: \"kubernetes.io/projected/936adeed-5876-49da-b102-8187f5bc998a-kube-api-access-qkg29\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384518 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384558 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b18dee05-6423-4857-95c5-63d2a976e19f-audit-dir\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384620 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b18dee05-6423-4857-95c5-63d2a976e19f-audit-dir\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.384641 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44ede31-5627-4422-b319-14db754817f4-config\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.385148 5107 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-dir\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.385460 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/936adeed-5876-49da-b102-8187f5bc998a-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.385502 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c7d5497-9496-4ba6-8f07-95f5d955d403-kube-api-access\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.385526 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6s2k\" (UniqueName: \"kubernetes.io/projected/873d11a3-8ce7-483a-9496-18ce7ddc339c-kube-api-access-z6s2k\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.385574 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ff95d2f-84b0-4ead-ab7d-65268a250ede-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.385608 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbx2w\" (UniqueName: \"kubernetes.io/projected/926c0a09-eb65-428f-9fd5-9c7c6c80799d-kube-api-access-qbx2w\") pod \"olm-operator-5cdf44d969-gg5st\" (UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.385635 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a708af6-a88c-47e1-85cf-8512edab0a65-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.385665 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/873d11a3-8ce7-483a-9496-18ce7ddc339c-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.385980 5107 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.386270 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-image-import-ca\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.387748 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cecc62a2-1a5f-4b0f-95bf-459d1493d1df-available-featuregates\") pod \"openshift-config-operator-5777786469-zmswq\" (UID: \"cecc62a2-1a5f-4b0f-95bf-459d1493d1df\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.387769 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ba61487-45ca-44b7-aaed-0faa630aaa88-config\") pod \"openshift-apiserver-operator-846cbfc458-wwqgc\" (UID: \"6ba61487-45ca-44b7-aaed-0faa630aaa88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.387914 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.388210 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.388328 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.388706 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-config\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.388936 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/936adeed-5876-49da-b102-8187f5bc998a-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.389300 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/b18dee05-6423-4857-95c5-63d2a976e19f-audit\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.389306 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.389520 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-policies\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.389854 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-serviceca\") pod \"image-pruner-29489760-jn9bq\" (UID: \"42d6fb86-e6fd-4b77-b921-d62cd5b6e825\") " pod="openshift-image-registry/image-pruner-29489760-jn9bq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.390549 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.391027 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ba61487-45ca-44b7-aaed-0faa630aaa88-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-wwqgc\" (UID: \"6ba61487-45ca-44b7-aaed-0faa630aaa88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.391358 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b18dee05-6423-4857-95c5-63d2a976e19f-encryption-config\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.391812 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b44ede31-5627-4422-b319-14db754817f4-trusted-ca\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.392027 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 
00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.392387 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44ede31-5627-4422-b319-14db754817f4-config\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.393004 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-client-ca\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.393413 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.393526 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.393709 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.394293 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b18dee05-6423-4857-95c5-63d2a976e19f-etcd-client\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.394302 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b18dee05-6423-4857-95c5-63d2a976e19f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.394683 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecc62a2-1a5f-4b0f-95bf-459d1493d1df-serving-cert\") pod \"openshift-config-operator-5777786469-zmswq\" (UID: \"cecc62a2-1a5f-4b0f-95bf-459d1493d1df\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.394908 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-serving-cert\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc 
kubenswrapper[5107]: I0126 00:11:42.395030 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936adeed-5876-49da-b102-8187f5bc998a-config\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.395209 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-tmp\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.395851 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.396600 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/936adeed-5876-49da-b102-8187f5bc998a-serving-cert\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.397571 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b44ede31-5627-4422-b319-14db754817f4-serving-cert\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.397902 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b18dee05-6423-4857-95c5-63d2a976e19f-serving-cert\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.412044 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.413727 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.414138 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: 
I0126 00:11:42.414194 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.415371 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.431958 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.437144 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.438834 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.447157 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.447215 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.447757 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.451621 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.452136 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.452289 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.457680 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.457819 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.461127 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-9vz6c"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.461352 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.463812 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.463941 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.471987 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.477282 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-lqkzh"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.477523 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491047 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c7d5497-9496-4ba6-8f07-95f5d955d403-kube-api-access\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491096 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z6s2k\" (UniqueName: \"kubernetes.io/projected/873d11a3-8ce7-483a-9496-18ce7ddc339c-kube-api-access-z6s2k\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491138 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ff95d2f-84b0-4ead-ab7d-65268a250ede-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491156 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qbx2w\" (UniqueName: \"kubernetes.io/projected/926c0a09-eb65-428f-9fd5-9c7c6c80799d-kube-api-access-qbx2w\") pod \"olm-operator-5cdf44d969-gg5st\" (UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491175 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a708af6-a88c-47e1-85cf-8512edab0a65-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491195 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/873d11a3-8ce7-483a-9496-18ce7ddc339c-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491215 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491233 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0470d1dc-849c-40d7-9a25-efb425c4e111-serving-cert\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491251 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7qzl7\" (UniqueName: \"kubernetes.io/projected/d93df320-4284-49f0-b63d-ba8a86943f2e-kube-api-access-7qzl7\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491282 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0470d1dc-849c-40d7-9a25-efb425c4e111-config\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491303 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/926c0a09-eb65-428f-9fd5-9c7c6c80799d-tmpfs\") pod \"olm-operator-5cdf44d969-gg5st\" (UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491324 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d93df320-4284-49f0-b63d-ba8a86943f2e-tmp\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491343 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/873d11a3-8ce7-483a-9496-18ce7ddc339c-images\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491368 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ff95d2f-84b0-4ead-ab7d-65268a250ede-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 
26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491386 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7d5497-9496-4ba6-8f07-95f5d955d403-serving-cert\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.491843 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/926c0a09-eb65-428f-9fd5-9c7c6c80799d-tmpfs\") pod \"olm-operator-5cdf44d969-gg5st\" (UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492168 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d93df320-4284-49f0-b63d-ba8a86943f2e-tmp\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492401 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0470d1dc-849c-40d7-9a25-efb425c4e111-config\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492431 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492505 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492540 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086c90e6-e51d-42dc-be10-5df7ebaa5e16-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492558 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c023721-040d-42ad-b8f7-6c190a17f193-config\") pod \"kube-storage-version-migrator-operator-565b79b866-gjkxw\" (UID: \"2c023721-040d-42ad-b8f7-6c190a17f193\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492582 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: 
\"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492617 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086c90e6-e51d-42dc-be10-5df7ebaa5e16-config\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492637 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0470d1dc-849c-40d7-9a25-efb425c4e111-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492660 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-24225\" (UniqueName: \"kubernetes.io/projected/0470d1dc-849c-40d7-9a25-efb425c4e111-kube-api-access-24225\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492678 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492717 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492740 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492759 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0470d1dc-849c-40d7-9a25-efb425c4e111-etcd-client\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492779 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a708af6-a88c-47e1-85cf-8512edab0a65-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492797 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/873d11a3-8ce7-483a-9496-18ce7ddc339c-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492841 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5j5ld\" (UniqueName: \"kubernetes.io/projected/2c023721-040d-42ad-b8f7-6c190a17f193-kube-api-access-5j5ld\") pod \"kube-storage-version-migrator-operator-565b79b866-gjkxw\" (UID: \"2c023721-040d-42ad-b8f7-6c190a17f193\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492858 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/926c0a09-eb65-428f-9fd5-9c7c6c80799d-profile-collector-cert\") pod \"olm-operator-5cdf44d969-gg5st\" (UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492903 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/086c90e6-e51d-42dc-be10-5df7ebaa5e16-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492921 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0470d1dc-849c-40d7-9a25-efb425c4e111-tmp-dir\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492940 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a708af6-a88c-47e1-85cf-8512edab0a65-config\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492956 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1a708af6-a88c-47e1-85cf-8512edab0a65-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492992 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0470d1dc-849c-40d7-9a25-efb425c4e111-etcd-ca\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.492994 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/873d11a3-8ce7-483a-9496-18ce7ddc339c-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.493010 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/926c0a09-eb65-428f-9fd5-9c7c6c80799d-srv-cert\") pod \"olm-operator-5cdf44d969-gg5st\" (UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.493033 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lllt8\" (UniqueName: \"kubernetes.io/projected/086c90e6-e51d-42dc-be10-5df7ebaa5e16-kube-api-access-lllt8\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.493051 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7tvkq\" (UniqueName: \"kubernetes.io/projected/7ff95d2f-84b0-4ead-ab7d-65268a250ede-kube-api-access-7tvkq\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.493067 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c023721-040d-42ad-b8f7-6c190a17f193-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-gjkxw\" (UID: \"2c023721-040d-42ad-b8f7-6c190a17f193\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.493090 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.493108 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwq45\" (UniqueName: \"kubernetes.io/projected/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-kube-api-access-vwq45\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.493125 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ff95d2f-84b0-4ead-ab7d-65268a250ede-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.493145 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7d5497-9496-4ba6-8f07-95f5d955d403-config\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.493209 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c7d5497-9496-4ba6-8f07-95f5d955d403-tmp-dir\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.494141 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c7d5497-9496-4ba6-8f07-95f5d955d403-tmp-dir\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.494434 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0470d1dc-849c-40d7-9a25-efb425c4e111-etcd-service-ca\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.494488 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.494517 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0470d1dc-849c-40d7-9a25-efb425c4e111-serving-cert\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.494622 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.495082 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/086c90e6-e51d-42dc-be10-5df7ebaa5e16-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.494956 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.495473 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0470d1dc-849c-40d7-9a25-efb425c4e111-tmp-dir\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.495522 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1a708af6-a88c-47e1-85cf-8512edab0a65-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.496043 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0470d1dc-849c-40d7-9a25-efb425c4e111-etcd-ca\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.498040 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/926c0a09-eb65-428f-9fd5-9c7c6c80799d-profile-collector-cert\") pod \"olm-operator-5cdf44d969-gg5st\" (UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.499008 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.499727 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0470d1dc-849c-40d7-9a25-efb425c4e111-etcd-client\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.500139 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/926c0a09-eb65-428f-9fd5-9c7c6c80799d-srv-cert\") pod \"olm-operator-5cdf44d969-gg5st\" (UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.512067 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.514726 5107 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086c90e6-e51d-42dc-be10-5df7ebaa5e16-config\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.520482 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.520634 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-lqkzh" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.533716 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.534942 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.534994 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.535026 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvxpc"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.538568 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-4gmk9"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.538598 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-klq76"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.538724 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541337 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-lpd5s"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541357 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-64rgr"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541368 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541378 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29489760-jn9bq"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541387 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-zmswq"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541397 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-wsw2x"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541406 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-5hcgj"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541416 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-59jn5"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541427 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xqx9c"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541436 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-flbvs"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541447 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541456 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541465 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-mjn4v"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541565 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541581 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541592 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541605 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541614 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc"] Jan 26 00:11:42 crc 
kubenswrapper[5107]: I0126 00:11:42.541622 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541630 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-bf6bf"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541638 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541646 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541656 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541590 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-klq76" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541665 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541793 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541810 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-lqkzh"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541822 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541834 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541847 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541860 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.541874 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-46x2w"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.545897 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.545923 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7mhc8"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.546075 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.549566 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-46x2w"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.549588 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.549598 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.549608 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.549639 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-9vz6c"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.549658 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7mhc8"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.549675 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-gfrwv"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.549759 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.551776 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.556173 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gfrwv"] Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.556417 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gfrwv" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.557708 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086c90e6-e51d-42dc-be10-5df7ebaa5e16-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.572006 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.592008 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.605093 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7d5497-9496-4ba6-8f07-95f5d955d403-serving-cert\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.611950 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.631692 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.652484 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.655818 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7d5497-9496-4ba6-8f07-95f5d955d403-config\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.672422 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.678876 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a708af6-a88c-47e1-85cf-8512edab0a65-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.691973 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.712247 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.732508 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.736851 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a708af6-a88c-47e1-85cf-8512edab0a65-config\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.753844 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.778872 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.782913 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ff95d2f-84b0-4ead-ab7d-65268a250ede-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.793698 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.812922 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.848830 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4g9h\" (UniqueName: \"kubernetes.io/projected/f147b0c8-28b8-4818-a30c-f6aa0da709db-kube-api-access-g4g9h\") pod \"machine-api-operator-755bb95488-lpd5s\" (UID: \"f147b0c8-28b8-4818-a30c-f6aa0da709db\") " pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.869414 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhbp7\" (UniqueName: \"kubernetes.io/projected/add7b84d-7f90-4850-9568-c7f3755404ca-kube-api-access-zhbp7\") pod \"apiserver-8596bd845d-kq9jq\" (UID: \"add7b84d-7f90-4850-9568-c7f3755404ca\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.871964 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.875306 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ff95d2f-84b0-4ead-ab7d-65268a250ede-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.906906 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-98r8x\" (UniqueName: \"kubernetes.io/projected/1a0a54b9-6415-4f43-97d3-1b95793389ec-kube-api-access-98r8x\") pod \"machine-approver-54c688565-6wxtb\" (UID: \"1a0a54b9-6415-4f43-97d3-1b95793389ec\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.913645 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.914661 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c023721-040d-42ad-b8f7-6c190a17f193-config\") pod \"kube-storage-version-migrator-operator-565b79b866-gjkxw\" (UID: \"2c023721-040d-42ad-b8f7-6c190a17f193\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.932418 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.953219 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.959572 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c023721-040d-42ad-b8f7-6c190a17f193-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-gjkxw\" (UID: \"2c023721-040d-42ad-b8f7-6c190a17f193\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.972085 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.972806 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.992255 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.996411 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" Jan 26 00:11:42 crc kubenswrapper[5107]: I0126 00:11:42.997156 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.032421 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.045012 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/873d11a3-8ce7-483a-9496-18ce7ddc339c-images\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.048928 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvkwc\" (UniqueName: \"kubernetes.io/projected/1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f-kube-api-access-gvkwc\") pod \"console-64d44f6ddf-4gmk9\" (UID: \"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f\") " pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.052656 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.072408 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.079136 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/873d11a3-8ce7-483a-9496-18ce7ddc339c-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.092315 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.107413 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.112169 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.182415 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.182875 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.191943 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.192199 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.289918 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.294746 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.315097 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.315493 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.315520 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.315503 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.317409 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.318744 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.336034 5107 request.go:752] "Waited before sending request" delay="1.000771602s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.340222 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.351506 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.381789 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.400974 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.424582 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.433808 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.453741 5107 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.472062 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.491741 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.496293 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-lpd5s"] Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.511969 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.537740 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq"] Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.539937 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.553430 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.571661 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.610033 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.637382 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqps7\" (UniqueName: \"kubernetes.io/projected/cecc62a2-1a5f-4b0f-95bf-459d1493d1df-kube-api-access-bqps7\") pod \"openshift-config-operator-5777786469-zmswq\" (UID: \"cecc62a2-1a5f-4b0f-95bf-459d1493d1df\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.672444 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" event={"ID":"1a0a54b9-6415-4f43-97d3-1b95793389ec","Type":"ContainerStarted","Data":"def66a465b22b2b92ad35cac6ed60fdff598d8d7b33fa5b79b742a85e3bcea71"} Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.672491 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" event={"ID":"1a0a54b9-6415-4f43-97d3-1b95793389ec","Type":"ContainerStarted","Data":"11056a7b4cfdcef708dac0de82ebf000cbfceec52931f5a8111c52b577e4ff97"} Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.673750 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" event={"ID":"f147b0c8-28b8-4818-a30c-f6aa0da709db","Type":"ContainerStarted","Data":"ec64e4c479f0299aed7c0df08fd546ec6cf916f6cc0ce3d130cd87525541c1d0"} Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.674484 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" 
event={"ID":"add7b84d-7f90-4850-9568-c7f3755404ca","Type":"ContainerStarted","Data":"d8bdee910a52346f011cc009b768cbc211477a06dabc9efe3a95b368b2413446"} Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.681257 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-4gmk9"] Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.693001 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hgwf\" (UniqueName: \"kubernetes.io/projected/b18dee05-6423-4857-95c5-63d2a976e19f-kube-api-access-7hgwf\") pod \"apiserver-9ddfb9f55-flbvs\" (UID: \"b18dee05-6423-4857-95c5-63d2a976e19f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.701026 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz2dh\" (UniqueName: \"kubernetes.io/projected/4498876a-5953-499f-aa71-6899b8529dcf-kube-api-access-zz2dh\") pod \"downloads-747b44746d-64rgr\" (UID: \"4498876a-5953-499f-aa71-6899b8529dcf\") " pod="openshift-console/downloads-747b44746d-64rgr" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.711260 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvlxh\" (UniqueName: \"kubernetes.io/projected/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-kube-api-access-bvlxh\") pod \"route-controller-manager-776cdc94d6-6sr6w\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.744161 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.745356 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb5hj\" (UniqueName: \"kubernetes.io/projected/b44ede31-5627-4422-b319-14db754817f4-kube-api-access-pb5hj\") pod \"console-operator-67c89758df-mjn4v\" (UID: \"b44ede31-5627-4422-b319-14db754817f4\") " pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.745488 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-64rgr" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.745830 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.746284 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.774251 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgmm2\" (UniqueName: \"kubernetes.io/projected/6ba61487-45ca-44b7-aaed-0faa630aaa88-kube-api-access-mgmm2\") pod \"openshift-apiserver-operator-846cbfc458-wwqgc\" (UID: \"6ba61487-45ca-44b7-aaed-0faa630aaa88\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.785815 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkg29\" (UniqueName: \"kubernetes.io/projected/936adeed-5876-49da-b102-8187f5bc998a-kube-api-access-qkg29\") pod \"authentication-operator-7f5c659b84-n2dtl\" (UID: \"936adeed-5876-49da-b102-8187f5bc998a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.833537 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26dbv\" (UniqueName: \"kubernetes.io/projected/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-kube-api-access-26dbv\") pod \"image-pruner-29489760-jn9bq\" (UID: \"42d6fb86-e6fd-4b77-b921-d62cd5b6e825\") " pod="openshift-image-registry/image-pruner-29489760-jn9bq" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.850362 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.857566 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.865720 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zdjl\" (UniqueName: \"kubernetes.io/projected/c1eb51c7-ee2f-4230-929d-62d6608eca89-kube-api-access-5zdjl\") pod \"oauth-openshift-66458b6674-wsw2x\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.872065 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.892619 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.913541 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.935835 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.959207 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-jn9bq" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.961970 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.971688 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.991138 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:43 crc kubenswrapper[5107]: I0126 00:11:43.994047 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.005215 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.013754 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.031792 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.050107 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.052938 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.064433 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.072261 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.093684 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.114784 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.134642 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-zmswq"] Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.140554 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.155194 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.173323 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.199095 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.222049 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.232749 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.234257 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w"] Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.252781 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.264215 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-flbvs"] Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.286671 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-64rgr"] Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.302125 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c7d5497-9496-4ba6-8f07-95f5d955d403-kube-api-access\") pod \"kube-apiserver-operator-575994946d-rhc6b\" (UID: \"7c7d5497-9496-4ba6-8f07-95f5d955d403\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.311925 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z6s2k\" (UniqueName: \"kubernetes.io/projected/873d11a3-8ce7-483a-9496-18ce7ddc339c-kube-api-access-z6s2k\") pod \"machine-config-operator-67c9d58cbb-97496\" (UID: \"873d11a3-8ce7-483a-9496-18ce7ddc339c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.335487 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbx2w\" (UniqueName: \"kubernetes.io/projected/926c0a09-eb65-428f-9fd5-9c7c6c80799d-kube-api-access-qbx2w\") pod \"olm-operator-5cdf44d969-gg5st\" (UID: \"926c0a09-eb65-428f-9fd5-9c7c6c80799d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.351437 5107 request.go:752] "Waited before sending request" delay="1.858262359s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.362868 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qzl7\" (UniqueName: \"kubernetes.io/projected/d93df320-4284-49f0-b63d-ba8a86943f2e-kube-api-access-7qzl7\") pod \"marketplace-operator-547dbd544d-59jn5\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.385926 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.386021 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a708af6-a88c-47e1-85cf-8512edab0a65-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2cdrs\" (UID: \"1a708af6-a88c-47e1-85cf-8512edab0a65\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.399683 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j5ld\" (UniqueName: \"kubernetes.io/projected/2c023721-040d-42ad-b8f7-6c190a17f193-kube-api-access-5j5ld\") pod \"kube-storage-version-migrator-operator-565b79b866-gjkxw\" (UID: \"2c023721-040d-42ad-b8f7-6c190a17f193\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.412649 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-24225\" (UniqueName: \"kubernetes.io/projected/0470d1dc-849c-40d7-9a25-efb425c4e111-kube-api-access-24225\") pod \"etcd-operator-69b85846b6-7mzzj\" (UID: \"0470d1dc-849c-40d7-9a25-efb425c4e111\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.422376 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.426184 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lllt8\" (UniqueName: \"kubernetes.io/projected/086c90e6-e51d-42dc-be10-5df7ebaa5e16-kube-api-access-lllt8\") pod \"openshift-controller-manager-operator-686468bdd5-2ldq5\" (UID: \"086c90e6-e51d-42dc-be10-5df7ebaa5e16\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.431164 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.450145 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.452421 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.457256 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.471591 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-wsw2x"] Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.471764 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.495507 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwq45\" (UniqueName: \"kubernetes.io/projected/be2bed85-ec40-4cd3-bf51-8e7ed0111e6f-kube-api-access-vwq45\") pod \"cluster-image-registry-operator-86c45576b9-96jl7\" (UID: \"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.519338 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.533251 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.538399 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tvkq\" (UniqueName: \"kubernetes.io/projected/7ff95d2f-84b0-4ead-ab7d-65268a250ede-kube-api-access-7tvkq\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.570929 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ff95d2f-84b0-4ead-ab7d-65268a250ede-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-f9vts\" (UID: \"7ff95d2f-84b0-4ead-ab7d-65268a250ede\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.736386 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.736422 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.742257 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.742826 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.744830 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.746952 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.747093 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.747236 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.747325 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.747248 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.747461 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.747752 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.747789 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.748374 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.749672 5107 generic.go:358] "Generic (PLEG): container finished" podID="add7b84d-7f90-4850-9568-c7f3755404ca" containerID="353421a5956f7aff568111ab661247af31ac8a611f8c5920a59d9bef15316d9e" exitCode=0 Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.749831 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" event={"ID":"add7b84d-7f90-4850-9568-c7f3755404ca","Type":"ContainerDied","Data":"353421a5956f7aff568111ab661247af31ac8a611f8c5920a59d9bef15316d9e"} Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.754795 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.758990 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" event={"ID":"1dcc8c3a-74e3-404d-8f0f-cec0001cf476","Type":"ContainerStarted","Data":"656bbed1bfc0386c8b956b5989f88aadce22e0461cdd846f3ac9b434a35050cf"} Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.777030 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.780380 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl"] Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.780432 5107 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-mjn4v"] Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.792406 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29489760-jn9bq"] Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.792975 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.799661 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc"] Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.814407 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" event={"ID":"b18dee05-6423-4857-95c5-63d2a976e19f","Type":"ContainerStarted","Data":"aac7badfff52436dbbb58577c457a633919e9cced2ee05c3fe3fb58f65cde9cb"} Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.814582 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.815994 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-64rgr" event={"ID":"4498876a-5953-499f-aa71-6899b8529dcf","Type":"ContainerStarted","Data":"eb3175f750ad16e9baf9493f77c2aa3175e15d96eafde32c66467d17e3d85220"} Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.822539 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" event={"ID":"1a0a54b9-6415-4f43-97d3-1b95793389ec","Type":"ContainerStarted","Data":"fca000cb9bc6ba1b07011cc546ea181c23fd8e59ba9c675f2643d141ecdf8393"} Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.829113 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" event={"ID":"cecc62a2-1a5f-4b0f-95bf-459d1493d1df","Type":"ContainerStarted","Data":"b71de5ef52784bfb207767ec7013b587b4eaf36e2032ffa9a00594643ced9bf3"} Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.851697 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-4gmk9" event={"ID":"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f","Type":"ContainerStarted","Data":"8fbb3ba709b6fd05bd4f0bab5b9fb891a2a3178622f93d0bc4b4e607115126d0"} Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.851743 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-4gmk9" event={"ID":"1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f","Type":"ContainerStarted","Data":"67f0c78fbe0135d8c7b0e38960025577567e4e08c37ad0a32e597425ff73f407"} Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.863304 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" event={"ID":"f147b0c8-28b8-4818-a30c-f6aa0da709db","Type":"ContainerStarted","Data":"faa93a181bb0cf3f29e61657bbff132fe20367fbae85e656870987993f9a7bcc"} Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.863362 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" event={"ID":"f147b0c8-28b8-4818-a30c-f6aa0da709db","Type":"ContainerStarted","Data":"03ba27cac0ad6d4a69f58530321cfefcaa31d27e75a2318b963669a5aca41182"} Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.878716 5107 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 26 00:11:44 crc kubenswrapper[5107]: I0126 00:11:44.972784 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.034658 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.035126 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.036953 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.039696 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.039844 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.040052 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174052 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-tmp\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174383 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k944z\" (UniqueName: \"kubernetes.io/projected/e39cba7d-bc11-44ab-a079-c2b873d17ef9-kube-api-access-k944z\") pod \"package-server-manager-77f986bd66-pwh7s\" (UID: \"e39cba7d-bc11-44ab-a079-c2b873d17ef9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174477 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e75356d-8170-4619-9539-ea5e50c2b892-secret-volume\") pod \"collect-profiles-29489760-g5ptf\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174508 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gwql\" (UniqueName: \"kubernetes.io/projected/39ce6772-6cb0-4cfd-afaa-47f5a73ede25-kube-api-access-4gwql\") pod \"service-ca-operator-5b9c976747-r75qz\" (UID: \"39ce6772-6cb0-4cfd-afaa-47f5a73ede25\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174606 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-config\") 
pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174658 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqg9b\" (UniqueName: \"kubernetes.io/projected/8e75356d-8170-4619-9539-ea5e50c2b892-kube-api-access-hqg9b\") pod \"collect-profiles-29489760-g5ptf\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174694 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1ad6f093-e118-435a-9ebd-f7346da27676-tmp-dir\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174749 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ae7da3db-5cbd-40ff-adfb-417c0d055042-installation-pull-secrets\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174789 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80801f36-b03c-44af-bbaa-4e9a962f9a30-metrics-certs\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174817 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea19e3ee-138c-4fc9-aa7f-c2c7747b3468-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-lckdk\" (UID: \"ea19e3ee-138c-4fc9-aa7f-c2c7747b3468\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174854 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-certificates\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174882 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lj8h\" (UniqueName: \"kubernetes.io/projected/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-kube-api-access-7lj8h\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174942 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ff2a0637-a303-4291-9db1-a2edaa44d952-certs\") pod 
\"machine-config-server-klq76\" (UID: \"ff2a0637-a303-4291-9db1-a2edaa44d952\") " pod="openshift-machine-config-operator/machine-config-server-klq76" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.174999 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175033 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175076 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrpd\" (UniqueName: \"kubernetes.io/projected/dd738239-7d02-47f2-aad8-bb51fbe73201-kube-api-access-mhrpd\") pod \"cluster-samples-operator-6b564684c8-6st9d\" (UID: \"dd738239-7d02-47f2-aad8-bb51fbe73201\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175102 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkfqr\" (UniqueName: \"kubernetes.io/projected/c53ce89a-3e31-41ac-96d2-c4326f044986-kube-api-access-qkfqr\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175455 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-apiservice-cert\") pod \"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175495 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175520 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-csi-data-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175576 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8781666d-7431-4ebc-aa57-0a90d686a8fd-tmp-dir\") pod 
\"dns-operator-799b87ffcd-bf6bf\" (UID: \"8781666d-7431-4ebc-aa57-0a90d686a8fd\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175653 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80801f36-b03c-44af-bbaa-4e9a962f9a30-service-ca-bundle\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175686 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c-signing-key\") pod \"service-ca-74545575db-lqkzh\" (UID: \"3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c\") " pod="openshift-service-ca/service-ca-74545575db-lqkzh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175701 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-plugins-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175724 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/80801f36-b03c-44af-bbaa-4e9a962f9a30-stats-auth\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175747 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e39cba7d-bc11-44ab-a079-c2b873d17ef9-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-pwh7s\" (UID: \"e39cba7d-bc11-44ab-a079-c2b873d17ef9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175778 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljfn7\" (UniqueName: \"kubernetes.io/projected/3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c-kube-api-access-ljfn7\") pod \"service-ca-74545575db-lqkzh\" (UID: \"3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c\") " pod="openshift-service-ca/service-ca-74545575db-lqkzh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175792 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175829 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/27c8cad2-b082-4ca6-b198-1d9817a2e90e-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") 
" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.175917 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-tls\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176071 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/27c8cad2-b082-4ca6-b198-1d9817a2e90e-tmpfs\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176102 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ff2a0637-a303-4291-9db1-a2edaa44d952-node-bootstrap-token\") pod \"machine-config-server-klq76\" (UID: \"ff2a0637-a303-4291-9db1-a2edaa44d952\") " pod="openshift-machine-config-operator/machine-config-server-klq76" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176145 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b577373-c7f0-4128-953f-e221abc2d09b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-g92zh\" (UID: \"6b577373-c7f0-4128-953f-e221abc2d09b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176174 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-serving-cert\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176373 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-client-ca\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176504 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-trusted-ca\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176670 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " 
pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176724 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfggk\" (UniqueName: \"kubernetes.io/projected/bd500fea-ccff-4a18-98ca-449906eac69c-kube-api-access-hfggk\") pod \"ingress-canary-gfrwv\" (UID: \"bd500fea-ccff-4a18-98ca-449906eac69c\") " pod="openshift-ingress-canary/ingress-canary-gfrwv" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176748 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-mountpoint-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176850 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/568b36ce-cb38-401e-afc3-3c6e518c9c1a-ready\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176912 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pffdl\" (UniqueName: \"kubernetes.io/projected/ea19e3ee-138c-4fc9-aa7f-c2c7747b3468-kube-api-access-pffdl\") pod \"control-plane-machine-set-operator-75ffdb6fcd-lckdk\" (UID: \"ea19e3ee-138c-4fc9-aa7f-c2c7747b3468\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.176966 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d44nt\" (UniqueName: \"kubernetes.io/projected/1ad6f093-e118-435a-9ebd-f7346da27676-kube-api-access-d44nt\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.177001 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x479j\" (UniqueName: \"kubernetes.io/projected/27c8cad2-b082-4ca6-b198-1d9817a2e90e-kube-api-access-x479j\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.177084 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ad6f093-e118-435a-9ebd-f7346da27676-config-volume\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.177113 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd500fea-ccff-4a18-98ca-449906eac69c-cert\") pod \"ingress-canary-gfrwv\" (UID: \"bd500fea-ccff-4a18-98ca-449906eac69c\") " pod="openshift-ingress-canary/ingress-canary-gfrwv" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.177146 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-qlbxn\" (UniqueName: \"kubernetes.io/projected/80801f36-b03c-44af-bbaa-4e9a962f9a30-kube-api-access-qlbxn\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.177208 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39ce6772-6cb0-4cfd-afaa-47f5a73ede25-serving-cert\") pod \"service-ca-operator-5b9c976747-r75qz\" (UID: \"39ce6772-6cb0-4cfd-afaa-47f5a73ede25\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.177302 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/27c8cad2-b082-4ca6-b198-1d9817a2e90e-srv-cert\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.177333 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-tmpfs\") pod \"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.177357 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v24np\" (UniqueName: \"kubernetes.io/projected/568b36ce-cb38-401e-afc3-3c6e518c9c1a-kube-api-access-v24np\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.177440 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-bound-sa-token\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.177473 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbvs2\" (UniqueName: \"kubernetes.io/projected/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-kube-api-access-pbvs2\") pod \"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178426 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/568b36ce-cb38-401e-afc3-3c6e518c9c1a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178488 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/39ce6772-6cb0-4cfd-afaa-47f5a73ede25-config\") pod \"service-ca-operator-5b9c976747-r75qz\" (UID: \"39ce6772-6cb0-4cfd-afaa-47f5a73ede25\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178517 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7w2n\" (UniqueName: \"kubernetes.io/projected/8781666d-7431-4ebc-aa57-0a90d686a8fd-kube-api-access-g7w2n\") pod \"dns-operator-799b87ffcd-bf6bf\" (UID: \"8781666d-7431-4ebc-aa57-0a90d686a8fd\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178537 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-config\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178602 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd738239-7d02-47f2-aad8-bb51fbe73201-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6st9d\" (UID: \"dd738239-7d02-47f2-aad8-bb51fbe73201\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178624 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b577373-c7f0-4128-953f-e221abc2d09b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-g92zh\" (UID: \"6b577373-c7f0-4128-953f-e221abc2d09b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178647 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ad6f093-e118-435a-9ebd-f7346da27676-metrics-tls\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178666 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-webhook-cert\") pod \"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178687 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e75356d-8170-4619-9539-ea5e50c2b892-config-volume\") pod \"collect-profiles-29489760-g5ptf\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178707 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c-signing-cabundle\") pod \"service-ca-74545575db-lqkzh\" (UID: \"3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c\") " pod="openshift-service-ca/service-ca-74545575db-lqkzh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178729 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zp5\" (UniqueName: \"kubernetes.io/projected/6b577373-c7f0-4128-953f-e221abc2d09b-kube-api-access-q9zp5\") pod \"machine-config-controller-f9cdd68f7-g92zh\" (UID: \"6b577373-c7f0-4128-953f-e221abc2d09b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178769 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pd74\" (UniqueName: \"kubernetes.io/projected/ff2a0637-a303-4291-9db1-a2edaa44d952-kube-api-access-4pd74\") pod \"machine-config-server-klq76\" (UID: \"ff2a0637-a303-4291-9db1-a2edaa44d952\") " pod="openshift-machine-config-operator/machine-config-server-klq76" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178793 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9jg6\" (UniqueName: \"kubernetes.io/projected/b2d6954a-8d0a-453a-9f0f-0051f612d78b-kube-api-access-f9jg6\") pod \"multus-admission-controller-69db94689b-9vz6c\" (UID: \"b2d6954a-8d0a-453a-9f0f-0051f612d78b\") " pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178817 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/80801f36-b03c-44af-bbaa-4e9a962f9a30-default-certificate\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178900 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/568b36ce-cb38-401e-afc3-3c6e518c9c1a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178929 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2d6954a-8d0a-453a-9f0f-0051f612d78b-webhook-certs\") pod \"multus-admission-controller-69db94689b-9vz6c\" (UID: \"b2d6954a-8d0a-453a-9f0f-0051f612d78b\") " pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.178982 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8fdz\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-kube-api-access-l8fdz\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.179024 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/8781666d-7431-4ebc-aa57-0a90d686a8fd-metrics-tls\") pod \"dns-operator-799b87ffcd-bf6bf\" (UID: \"8781666d-7431-4ebc-aa57-0a90d686a8fd\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.179044 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-socket-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.179072 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs4z9\" (UniqueName: \"kubernetes.io/projected/55c6d87d-ae3b-4818-b6ea-d00e1a453c20-kube-api-access-rs4z9\") pod \"migrator-866fcbc849-7fc7h\" (UID: \"55c6d87d-ae3b-4818-b6ea-d00e1a453c20\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h" Jan 26 00:11:45 crc kubenswrapper[5107]: E0126 00:11:45.180515 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.6805012 +0000 UTC m=+150.598095546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.182385 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ae7da3db-5cbd-40ff-adfb-417c0d055042-ca-trust-extracted\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.182413 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-registration-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283178 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283435 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b577373-c7f0-4128-953f-e221abc2d09b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-g92zh\" (UID: \"6b577373-c7f0-4128-953f-e221abc2d09b\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283467 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ad6f093-e118-435a-9ebd-f7346da27676-metrics-tls\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283490 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-webhook-cert\") pod \"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283505 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e75356d-8170-4619-9539-ea5e50c2b892-config-volume\") pod \"collect-profiles-29489760-g5ptf\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283522 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c-signing-cabundle\") pod \"service-ca-74545575db-lqkzh\" (UID: \"3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c\") " pod="openshift-service-ca/service-ca-74545575db-lqkzh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283537 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q9zp5\" (UniqueName: \"kubernetes.io/projected/6b577373-c7f0-4128-953f-e221abc2d09b-kube-api-access-q9zp5\") pod \"machine-config-controller-f9cdd68f7-g92zh\" (UID: \"6b577373-c7f0-4128-953f-e221abc2d09b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283558 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4pd74\" (UniqueName: \"kubernetes.io/projected/ff2a0637-a303-4291-9db1-a2edaa44d952-kube-api-access-4pd74\") pod \"machine-config-server-klq76\" (UID: \"ff2a0637-a303-4291-9db1-a2edaa44d952\") " pod="openshift-machine-config-operator/machine-config-server-klq76" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283574 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9jg6\" (UniqueName: \"kubernetes.io/projected/b2d6954a-8d0a-453a-9f0f-0051f612d78b-kube-api-access-f9jg6\") pod \"multus-admission-controller-69db94689b-9vz6c\" (UID: \"b2d6954a-8d0a-453a-9f0f-0051f612d78b\") " pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283593 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/80801f36-b03c-44af-bbaa-4e9a962f9a30-default-certificate\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283608 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/568b36ce-cb38-401e-afc3-3c6e518c9c1a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283624 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2d6954a-8d0a-453a-9f0f-0051f612d78b-webhook-certs\") pod \"multus-admission-controller-69db94689b-9vz6c\" (UID: \"b2d6954a-8d0a-453a-9f0f-0051f612d78b\") " pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283652 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l8fdz\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-kube-api-access-l8fdz\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283670 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8781666d-7431-4ebc-aa57-0a90d686a8fd-metrics-tls\") pod \"dns-operator-799b87ffcd-bf6bf\" (UID: \"8781666d-7431-4ebc-aa57-0a90d686a8fd\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283688 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-socket-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283717 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rs4z9\" (UniqueName: \"kubernetes.io/projected/55c6d87d-ae3b-4818-b6ea-d00e1a453c20-kube-api-access-rs4z9\") pod \"migrator-866fcbc849-7fc7h\" (UID: \"55c6d87d-ae3b-4818-b6ea-d00e1a453c20\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283764 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ae7da3db-5cbd-40ff-adfb-417c0d055042-ca-trust-extracted\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283781 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-registration-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283819 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-tmp\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc 
kubenswrapper[5107]: I0126 00:11:45.283847 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k944z\" (UniqueName: \"kubernetes.io/projected/e39cba7d-bc11-44ab-a079-c2b873d17ef9-kube-api-access-k944z\") pod \"package-server-manager-77f986bd66-pwh7s\" (UID: \"e39cba7d-bc11-44ab-a079-c2b873d17ef9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283873 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e75356d-8170-4619-9539-ea5e50c2b892-secret-volume\") pod \"collect-profiles-29489760-g5ptf\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283937 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4gwql\" (UniqueName: \"kubernetes.io/projected/39ce6772-6cb0-4cfd-afaa-47f5a73ede25-kube-api-access-4gwql\") pod \"service-ca-operator-5b9c976747-r75qz\" (UID: \"39ce6772-6cb0-4cfd-afaa-47f5a73ede25\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283965 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-config\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283985 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hqg9b\" (UniqueName: \"kubernetes.io/projected/8e75356d-8170-4619-9539-ea5e50c2b892-kube-api-access-hqg9b\") pod \"collect-profiles-29489760-g5ptf\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.283999 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1ad6f093-e118-435a-9ebd-f7346da27676-tmp-dir\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284017 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ae7da3db-5cbd-40ff-adfb-417c0d055042-installation-pull-secrets\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284035 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80801f36-b03c-44af-bbaa-4e9a962f9a30-metrics-certs\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284053 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ea19e3ee-138c-4fc9-aa7f-c2c7747b3468-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-lckdk\" (UID: \"ea19e3ee-138c-4fc9-aa7f-c2c7747b3468\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284072 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-certificates\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284087 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7lj8h\" (UniqueName: \"kubernetes.io/projected/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-kube-api-access-7lj8h\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284103 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ff2a0637-a303-4291-9db1-a2edaa44d952-certs\") pod \"machine-config-server-klq76\" (UID: \"ff2a0637-a303-4291-9db1-a2edaa44d952\") " pod="openshift-machine-config-operator/machine-config-server-klq76" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284121 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284140 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284178 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mhrpd\" (UniqueName: \"kubernetes.io/projected/dd738239-7d02-47f2-aad8-bb51fbe73201-kube-api-access-mhrpd\") pod \"cluster-samples-operator-6b564684c8-6st9d\" (UID: \"dd738239-7d02-47f2-aad8-bb51fbe73201\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284194 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qkfqr\" (UniqueName: \"kubernetes.io/projected/c53ce89a-3e31-41ac-96d2-c4326f044986-kube-api-access-qkfqr\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284213 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-apiservice-cert\") pod 
\"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284227 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284242 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-csi-data-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284258 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8781666d-7431-4ebc-aa57-0a90d686a8fd-tmp-dir\") pod \"dns-operator-799b87ffcd-bf6bf\" (UID: \"8781666d-7431-4ebc-aa57-0a90d686a8fd\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284282 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80801f36-b03c-44af-bbaa-4e9a962f9a30-service-ca-bundle\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284297 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c-signing-key\") pod \"service-ca-74545575db-lqkzh\" (UID: \"3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c\") " pod="openshift-service-ca/service-ca-74545575db-lqkzh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284313 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-plugins-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284328 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/80801f36-b03c-44af-bbaa-4e9a962f9a30-stats-auth\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284342 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e39cba7d-bc11-44ab-a079-c2b873d17ef9-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-pwh7s\" (UID: \"e39cba7d-bc11-44ab-a079-c2b873d17ef9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284358 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ljfn7\" (UniqueName: \"kubernetes.io/projected/3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c-kube-api-access-ljfn7\") pod \"service-ca-74545575db-lqkzh\" (UID: \"3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c\") " pod="openshift-service-ca/service-ca-74545575db-lqkzh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284372 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284388 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/27c8cad2-b082-4ca6-b198-1d9817a2e90e-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284407 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-tls\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284427 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/27c8cad2-b082-4ca6-b198-1d9817a2e90e-tmpfs\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284456 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ff2a0637-a303-4291-9db1-a2edaa44d952-node-bootstrap-token\") pod \"machine-config-server-klq76\" (UID: \"ff2a0637-a303-4291-9db1-a2edaa44d952\") " pod="openshift-machine-config-operator/machine-config-server-klq76" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284492 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b577373-c7f0-4128-953f-e221abc2d09b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-g92zh\" (UID: \"6b577373-c7f0-4128-953f-e221abc2d09b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284518 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-serving-cert\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284544 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-client-ca\") pod 
\"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284571 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-trusted-ca\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284604 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hfggk\" (UniqueName: \"kubernetes.io/projected/bd500fea-ccff-4a18-98ca-449906eac69c-kube-api-access-hfggk\") pod \"ingress-canary-gfrwv\" (UID: \"bd500fea-ccff-4a18-98ca-449906eac69c\") " pod="openshift-ingress-canary/ingress-canary-gfrwv" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284628 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-mountpoint-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284653 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/568b36ce-cb38-401e-afc3-3c6e518c9c1a-ready\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284672 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pffdl\" (UniqueName: \"kubernetes.io/projected/ea19e3ee-138c-4fc9-aa7f-c2c7747b3468-kube-api-access-pffdl\") pod \"control-plane-machine-set-operator-75ffdb6fcd-lckdk\" (UID: \"ea19e3ee-138c-4fc9-aa7f-c2c7747b3468\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284697 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d44nt\" (UniqueName: \"kubernetes.io/projected/1ad6f093-e118-435a-9ebd-f7346da27676-kube-api-access-d44nt\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284719 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x479j\" (UniqueName: \"kubernetes.io/projected/27c8cad2-b082-4ca6-b198-1d9817a2e90e-kube-api-access-x479j\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284756 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ad6f093-e118-435a-9ebd-f7346da27676-config-volume\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284776 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/bd500fea-ccff-4a18-98ca-449906eac69c-cert\") pod \"ingress-canary-gfrwv\" (UID: \"bd500fea-ccff-4a18-98ca-449906eac69c\") " pod="openshift-ingress-canary/ingress-canary-gfrwv" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284804 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qlbxn\" (UniqueName: \"kubernetes.io/projected/80801f36-b03c-44af-bbaa-4e9a962f9a30-kube-api-access-qlbxn\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284852 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39ce6772-6cb0-4cfd-afaa-47f5a73ede25-serving-cert\") pod \"service-ca-operator-5b9c976747-r75qz\" (UID: \"39ce6772-6cb0-4cfd-afaa-47f5a73ede25\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284914 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/27c8cad2-b082-4ca6-b198-1d9817a2e90e-srv-cert\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284940 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-tmpfs\") pod \"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.284988 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v24np\" (UniqueName: \"kubernetes.io/projected/568b36ce-cb38-401e-afc3-3c6e518c9c1a-kube-api-access-v24np\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.285024 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-bound-sa-token\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.285052 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pbvs2\" (UniqueName: \"kubernetes.io/projected/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-kube-api-access-pbvs2\") pod \"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.285078 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/568b36ce-cb38-401e-afc3-3c6e518c9c1a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc 
kubenswrapper[5107]: I0126 00:11:45.285117 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39ce6772-6cb0-4cfd-afaa-47f5a73ede25-config\") pod \"service-ca-operator-5b9c976747-r75qz\" (UID: \"39ce6772-6cb0-4cfd-afaa-47f5a73ede25\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.285150 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g7w2n\" (UniqueName: \"kubernetes.io/projected/8781666d-7431-4ebc-aa57-0a90d686a8fd-kube-api-access-g7w2n\") pod \"dns-operator-799b87ffcd-bf6bf\" (UID: \"8781666d-7431-4ebc-aa57-0a90d686a8fd\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.285175 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-config\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.285220 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd738239-7d02-47f2-aad8-bb51fbe73201-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6st9d\" (UID: \"dd738239-7d02-47f2-aad8-bb51fbe73201\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.286160 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e75356d-8170-4619-9539-ea5e50c2b892-config-volume\") pod \"collect-profiles-29489760-g5ptf\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.286822 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c-signing-cabundle\") pod \"service-ca-74545575db-lqkzh\" (UID: \"3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c\") " pod="openshift-service-ca/service-ca-74545575db-lqkzh" Jan 26 00:11:45 crc kubenswrapper[5107]: E0126 00:11:45.288030 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.78799788 +0000 UTC m=+150.705592246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.288797 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/568b36ce-cb38-401e-afc3-3c6e518c9c1a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.290239 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ae7da3db-5cbd-40ff-adfb-417c0d055042-ca-trust-extracted\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.290492 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-registration-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.291081 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1ad6f093-e118-435a-9ebd-f7346da27676-tmp-dir\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.297311 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-socket-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.298695 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-tmp\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.298985 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8781666d-7431-4ebc-aa57-0a90d686a8fd-metrics-tls\") pod \"dns-operator-799b87ffcd-bf6bf\" (UID: \"8781666d-7431-4ebc-aa57-0a90d686a8fd\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.299990 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-certificates\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " 
pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.300870 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.300992 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-plugins-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.302301 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ad6f093-e118-435a-9ebd-f7346da27676-metrics-tls\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.304194 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-client-ca\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.305651 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-trusted-ca\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.305860 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-mountpoint-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.306007 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-config\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.306100 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/568b36ce-cb38-401e-afc3-3c6e518c9c1a-ready\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.306709 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ad6f093-e118-435a-9ebd-f7346da27676-config-volume\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc 
kubenswrapper[5107]: I0126 00:11:45.306774 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd738239-7d02-47f2-aad8-bb51fbe73201-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6st9d\" (UID: \"dd738239-7d02-47f2-aad8-bb51fbe73201\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.307024 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39ce6772-6cb0-4cfd-afaa-47f5a73ede25-serving-cert\") pod \"service-ca-operator-5b9c976747-r75qz\" (UID: \"39ce6772-6cb0-4cfd-afaa-47f5a73ede25\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.307358 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-webhook-cert\") pod \"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.308395 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c-signing-key\") pod \"service-ca-74545575db-lqkzh\" (UID: \"3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c\") " pod="openshift-service-ca/service-ca-74545575db-lqkzh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.308807 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ae7da3db-5cbd-40ff-adfb-417c0d055042-installation-pull-secrets\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.312834 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.313396 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea19e3ee-138c-4fc9-aa7f-c2c7747b3468-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-lckdk\" (UID: \"ea19e3ee-138c-4fc9-aa7f-c2c7747b3468\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.313822 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-tmpfs\") pod \"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.314131 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/27c8cad2-b082-4ca6-b198-1d9817a2e90e-tmpfs\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.314648 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.320676 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/80801f36-b03c-44af-bbaa-4e9a962f9a30-default-certificate\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.321548 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80801f36-b03c-44af-bbaa-4e9a962f9a30-metrics-certs\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.322254 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c53ce89a-3e31-41ac-96d2-c4326f044986-csi-data-dir\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.325306 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/80801f36-b03c-44af-bbaa-4e9a962f9a30-stats-auth\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.326079 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b577373-c7f0-4128-953f-e221abc2d09b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-g92zh\" (UID: \"6b577373-c7f0-4128-953f-e221abc2d09b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.326120 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st"] Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.326537 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39ce6772-6cb0-4cfd-afaa-47f5a73ede25-config\") pod \"service-ca-operator-5b9c976747-r75qz\" (UID: \"39ce6772-6cb0-4cfd-afaa-47f5a73ede25\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.326599 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/568b36ce-cb38-401e-afc3-3c6e518c9c1a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: 
\"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.327004 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-serving-cert\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.327226 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8781666d-7431-4ebc-aa57-0a90d686a8fd-tmp-dir\") pod \"dns-operator-799b87ffcd-bf6bf\" (UID: \"8781666d-7431-4ebc-aa57-0a90d686a8fd\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.328018 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd500fea-ccff-4a18-98ca-449906eac69c-cert\") pod \"ingress-canary-gfrwv\" (UID: \"bd500fea-ccff-4a18-98ca-449906eac69c\") " pod="openshift-ingress-canary/ingress-canary-gfrwv" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.328038 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-config\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.328346 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80801f36-b03c-44af-bbaa-4e9a962f9a30-service-ca-bundle\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.330277 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2d6954a-8d0a-453a-9f0f-0051f612d78b-webhook-certs\") pod \"multus-admission-controller-69db94689b-9vz6c\" (UID: \"b2d6954a-8d0a-453a-9f0f-0051f612d78b\") " pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.332409 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-tls\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.332497 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e75356d-8170-4619-9539-ea5e50c2b892-secret-volume\") pod \"collect-profiles-29489760-g5ptf\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.332500 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-apiservice-cert\") pod 
\"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.332810 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ff2a0637-a303-4291-9db1-a2edaa44d952-node-bootstrap-token\") pod \"machine-config-server-klq76\" (UID: \"ff2a0637-a303-4291-9db1-a2edaa44d952\") " pod="openshift-machine-config-operator/machine-config-server-klq76" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.332924 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ff2a0637-a303-4291-9db1-a2edaa44d952-certs\") pod \"machine-config-server-klq76\" (UID: \"ff2a0637-a303-4291-9db1-a2edaa44d952\") " pod="openshift-machine-config-operator/machine-config-server-klq76" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.334126 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e39cba7d-bc11-44ab-a079-c2b873d17ef9-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-pwh7s\" (UID: \"e39cba7d-bc11-44ab-a079-c2b873d17ef9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.334271 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/27c8cad2-b082-4ca6-b198-1d9817a2e90e-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.336001 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/27c8cad2-b082-4ca6-b198-1d9817a2e90e-srv-cert\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.345111 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9zp5\" (UniqueName: \"kubernetes.io/projected/6b577373-c7f0-4128-953f-e221abc2d09b-kube-api-access-q9zp5\") pod \"machine-config-controller-f9cdd68f7-g92zh\" (UID: \"6b577373-c7f0-4128-953f-e221abc2d09b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.354228 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs4z9\" (UniqueName: \"kubernetes.io/projected/55c6d87d-ae3b-4818-b6ea-d00e1a453c20-kube-api-access-rs4z9\") pod \"migrator-866fcbc849-7fc7h\" (UID: \"55c6d87d-ae3b-4818-b6ea-d00e1a453c20\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.373725 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pd74\" (UniqueName: \"kubernetes.io/projected/ff2a0637-a303-4291-9db1-a2edaa44d952-kube-api-access-4pd74\") pod \"machine-config-server-klq76\" (UID: \"ff2a0637-a303-4291-9db1-a2edaa44d952\") " pod="openshift-machine-config-operator/machine-config-server-klq76" Jan 26 00:11:45 
crc kubenswrapper[5107]: I0126 00:11:45.387078 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: E0126 00:11:45.387489 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.887473856 +0000 UTC m=+150.805068202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.390922 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9jg6\" (UniqueName: \"kubernetes.io/projected/b2d6954a-8d0a-453a-9f0f-0051f612d78b-kube-api-access-f9jg6\") pod \"multus-admission-controller-69db94689b-9vz6c\" (UID: \"b2d6954a-8d0a-453a-9f0f-0051f612d78b\") " pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.407866 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b577373-c7f0-4128-953f-e221abc2d09b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-g92zh\" (UID: \"6b577373-c7f0-4128-953f-e221abc2d09b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.410173 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8fdz\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-kube-api-access-l8fdz\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.459933 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gwql\" (UniqueName: \"kubernetes.io/projected/39ce6772-6cb0-4cfd-afaa-47f5a73ede25-kube-api-access-4gwql\") pod \"service-ca-operator-5b9c976747-r75qz\" (UID: \"39ce6772-6cb0-4cfd-afaa-47f5a73ede25\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.461135 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k944z\" (UniqueName: \"kubernetes.io/projected/e39cba7d-bc11-44ab-a079-c2b873d17ef9-kube-api-access-k944z\") pod \"package-server-manager-77f986bd66-pwh7s\" (UID: \"e39cba7d-bc11-44ab-a079-c2b873d17ef9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.477949 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hqg9b\" (UniqueName: \"kubernetes.io/projected/8e75356d-8170-4619-9539-ea5e50c2b892-kube-api-access-hqg9b\") pod \"collect-profiles-29489760-g5ptf\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.489718 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:45 crc kubenswrapper[5107]: E0126 00:11:45.490218 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:45.990198482 +0000 UTC m=+150.907792838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.495194 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lj8h\" (UniqueName: \"kubernetes.io/projected/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-kube-api-access-7lj8h\") pod \"controller-manager-65b6cccf98-xqx9c\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.506396 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.511971 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.519058 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.521196 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlbxn\" (UniqueName: \"kubernetes.io/projected/80801f36-b03c-44af-bbaa-4e9a962f9a30-kube-api-access-qlbxn\") pod \"router-default-68cf44c8b8-mbr9b\" (UID: \"80801f36-b03c-44af-bbaa-4e9a962f9a30\") " pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.540854 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfggk\" (UniqueName: \"kubernetes.io/projected/bd500fea-ccff-4a18-98ca-449906eac69c-kube-api-access-hfggk\") pod \"ingress-canary-gfrwv\" (UID: \"bd500fea-ccff-4a18-98ca-449906eac69c\") " pod="openshift-ingress-canary/ingress-canary-gfrwv" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.557567 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.557732 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pffdl\" (UniqueName: \"kubernetes.io/projected/ea19e3ee-138c-4fc9-aa7f-c2c7747b3468-kube-api-access-pffdl\") pod \"control-plane-machine-set-operator-75ffdb6fcd-lckdk\" (UID: \"ea19e3ee-138c-4fc9-aa7f-c2c7747b3468\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.576431 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw"] Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.576641 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-59jn5"] Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.576801 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d44nt\" (UniqueName: \"kubernetes.io/projected/1ad6f093-e118-435a-9ebd-f7346da27676-kube-api-access-d44nt\") pod \"dns-default-46x2w\" (UID: \"1ad6f093-e118-435a-9ebd-f7346da27676\") " pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.579249 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-klq76" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.585214 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x479j\" (UniqueName: \"kubernetes.io/projected/27c8cad2-b082-4ca6-b198-1d9817a2e90e-kube-api-access-x479j\") pod \"catalog-operator-75ff9f647d-wk2r6\" (UID: \"27c8cad2-b082-4ca6-b198-1d9817a2e90e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.587774 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.595959 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: E0126 00:11:45.596258 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.096244442 +0000 UTC m=+151.013838788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.621783 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gfrwv" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.623617 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbvs2\" (UniqueName: \"kubernetes.io/projected/71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7-kube-api-access-pbvs2\") pod \"packageserver-7d4fc7d867-jxbv4\" (UID: \"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.629315 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5"] Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.629929 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v24np\" (UniqueName: \"kubernetes.io/projected/568b36ce-cb38-401e-afc3-3c6e518c9c1a-kube-api-access-v24np\") pod \"cni-sysctl-allowlist-ds-hvxpc\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.630294 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b"] Jan 26 00:11:45 crc kubenswrapper[5107]: W0126 00:11:45.641040 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff2a0637_a303_4291_9db1_a2edaa44d952.slice/crio-08b65e52dff5558b5c3152f1e0d868e97f2a7e37bcef2ba5edc041435704d3f4 WatchSource:0}: Error finding container 08b65e52dff5558b5c3152f1e0d868e97f2a7e37bcef2ba5edc041435704d3f4: Status 404 returned error can't find the container with id 08b65e52dff5558b5c3152f1e0d868e97f2a7e37bcef2ba5edc041435704d3f4 Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.646567 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhrpd\" (UniqueName: \"kubernetes.io/projected/dd738239-7d02-47f2-aad8-bb51fbe73201-kube-api-access-mhrpd\") pod \"cluster-samples-operator-6b564684c8-6st9d\" (UID: \"dd738239-7d02-47f2-aad8-bb51fbe73201\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.670112 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkfqr\" (UniqueName: \"kubernetes.io/projected/c53ce89a-3e31-41ac-96d2-c4326f044986-kube-api-access-qkfqr\") pod \"csi-hostpathplugin-7mhc8\" (UID: \"c53ce89a-3e31-41ac-96d2-c4326f044986\") " pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.685115 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.692311 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.697547 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:45 crc kubenswrapper[5107]: E0126 00:11:45.697908 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.197872948 +0000 UTC m=+151.115467294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.704836 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts"] Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.711495 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b5528d4-4c62-46d7-89d9-3a6de1a8f546-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-ffjjk\" (UID: \"3b5528d4-4c62-46d7-89d9-3a6de1a8f546\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.735616 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.742504 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-bound-sa-token\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.744242 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.760520 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7w2n\" (UniqueName: \"kubernetes.io/projected/8781666d-7431-4ebc-aa57-0a90d686a8fd-kube-api-access-g7w2n\") pod \"dns-operator-799b87ffcd-bf6bf\" (UID: \"8781666d-7431-4ebc-aa57-0a90d686a8fd\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.764883 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.772404 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.780169 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.782461 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljfn7\" (UniqueName: \"kubernetes.io/projected/3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c-kube-api-access-ljfn7\") pod \"service-ca-74545575db-lqkzh\" (UID: \"3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c\") " pod="openshift-service-ca/service-ca-74545575db-lqkzh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.789015 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.794301 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.802513 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: E0126 00:11:45.802812 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.302800156 +0000 UTC m=+151.220394492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.835316 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.847491 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-lqkzh" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.867215 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.880407 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496"] Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.886930 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj"] Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.888369 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs"] Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.912695 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7"] Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.913805 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.913979 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:45 crc kubenswrapper[5107]: E0126 00:11:45.930211 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.430146254 +0000 UTC m=+151.347740610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.931470 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:45 crc kubenswrapper[5107]: E0126 00:11:45.932016 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.431994736 +0000 UTC m=+151.349589142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.933800 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-klq76" event={"ID":"ff2a0637-a303-4291-9db1-a2edaa44d952","Type":"ContainerStarted","Data":"08b65e52dff5558b5c3152f1e0d868e97f2a7e37bcef2ba5edc041435704d3f4"} Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.934443 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" event={"ID":"086c90e6-e51d-42dc-be10-5df7ebaa5e16","Type":"ContainerStarted","Data":"a33be2c796cb48f1300e57d77738f98ecf02e83d5ecd906f902279c9a8f15ffa"} Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.935396 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" event={"ID":"926c0a09-eb65-428f-9fd5-9c7c6c80799d","Type":"ContainerStarted","Data":"6077568a57a67d5c1ae3e5eef8cc6582c8b13e34042cdff552173880224d3cb5"} Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.936262 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" event={"ID":"7c7d5497-9496-4ba6-8f07-95f5d955d403","Type":"ContainerStarted","Data":"da162d3f83beedb1b2ca774129b033b752067eba7d29d1d1ebe5dcf09569ff41"} Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.941975 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" event={"ID":"cecc62a2-1a5f-4b0f-95bf-459d1493d1df","Type":"ContainerStarted","Data":"af3de3f1dee5daa4239a262f9b396fc488d0b82edace14305d9def5bbdaf05d8"} Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.942899 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" event={"ID":"d93df320-4284-49f0-b63d-ba8a86943f2e","Type":"ContainerStarted","Data":"405cbb641a4f8745b92212ab993341c6d200e4b1a4c8fb6f258f763afb6975f8"} Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.943956 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" event={"ID":"2c023721-040d-42ad-b8f7-6c190a17f193","Type":"ContainerStarted","Data":"3e9550a93e18145cfc0181ebf4289ef86f078ee324da241016b11df205bc0796"} Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.944617 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-mjn4v" event={"ID":"b44ede31-5627-4422-b319-14db754817f4","Type":"ContainerStarted","Data":"b4c24dfb57b165446805ada976b112bf99a97ada1a5eba8c46eccd7f7ff5ee2b"} Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.945256 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-jn9bq" 
event={"ID":"42d6fb86-e6fd-4b77-b921-d62cd5b6e825","Type":"ContainerStarted","Data":"e07cf9b5690fbada99aa3df74d4ab52a8996875d54186912bb18136ccdf8a62e"} Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.945866 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" event={"ID":"6ba61487-45ca-44b7-aaed-0faa630aaa88","Type":"ContainerStarted","Data":"c43caa3f6866dddd44cb5805cfa6e067f9bd9c872bde010b98c12033a90b2c2d"} Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.946472 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" event={"ID":"c1eb51c7-ee2f-4230-929d-62d6608eca89","Type":"ContainerStarted","Data":"c29715f66068f5dd64bc0ad1202d0278a8092895116c00a5fe223fbdff71310a"} Jan 26 00:11:45 crc kubenswrapper[5107]: I0126 00:11:45.947041 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" event={"ID":"936adeed-5876-49da-b102-8187f5bc998a","Type":"ContainerStarted","Data":"0a541c0923a50036d719716b27aab9feec114db2789d76f0b93d81f0aca5a5cf"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.033861 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5107]: E0126 00:11:46.034350 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.534332332 +0000 UTC m=+151.451926678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5107]: W0126 00:11:46.048444 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod873d11a3_8ce7_483a_9496_18ce7ddc339c.slice/crio-adeebec6a70daae073e6e125d50532db8e305622996291b0d1c088c69ac06eee WatchSource:0}: Error finding container adeebec6a70daae073e6e125d50532db8e305622996291b0d1c088c69ac06eee: Status 404 returned error can't find the container with id adeebec6a70daae073e6e125d50532db8e305622996291b0d1c088c69ac06eee Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.136705 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:46 crc kubenswrapper[5107]: E0126 00:11:46.137172 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.637151831 +0000 UTC m=+151.554746227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.192992 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh"] Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.193028 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz"] Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.193062 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-46x2w"] Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.250570 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5107]: E0126 00:11:46.252498 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:46.752474282 +0000 UTC m=+151.670068628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.312686 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gfrwv"] Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.315757 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-9vz6c"] Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.354587 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:46 crc kubenswrapper[5107]: E0126 00:11:46.354922 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.85491036 +0000 UTC m=+151.772504706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.356179 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s"] Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.444611 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h"] Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.456383 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5107]: E0126 00:11:46.456502 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.956478934 +0000 UTC m=+151.874073280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.456960 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:46 crc kubenswrapper[5107]: E0126 00:11:46.457304 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:46.957295607 +0000 UTC m=+151.874889953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.559116 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5107]: E0126 00:11:46.560788 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.060754895 +0000 UTC m=+151.978349241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.660853 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:46 crc kubenswrapper[5107]: E0126 00:11:46.661503 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.161482255 +0000 UTC m=+152.079076601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.762258 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.762596 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d"] Jan 26 00:11:46 crc kubenswrapper[5107]: E0126 00:11:46.762673 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.262654538 +0000 UTC m=+152.180248884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.874254 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:46 crc kubenswrapper[5107]: E0126 00:11:46.878021 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.378000739 +0000 UTC m=+152.295595085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.957957 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xqx9c"] Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.958574 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-46x2w" event={"ID":"1ad6f093-e118-435a-9ebd-f7346da27676","Type":"ContainerStarted","Data":"6a7a3a9cfbb0e4a5614c00de8639abee8a7dd9f5c14edac52ba9f2a4fb2df923"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.959868 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" event={"ID":"568b36ce-cb38-401e-afc3-3c6e518c9c1a","Type":"ContainerStarted","Data":"1111a711ef028a1578b1e7f4d0c79072e7669296557a9cb959e0d63672fce082"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.961437 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gfrwv" event={"ID":"bd500fea-ccff-4a18-98ca-449906eac69c","Type":"ContainerStarted","Data":"1d003131003a6a6cdc22163be6d2cd3c019a13423135f8d0830e74a18f76d82f"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.962503 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" event={"ID":"0470d1dc-849c-40d7-9a25-efb425c4e111","Type":"ContainerStarted","Data":"247b29a05a1dca0f5a5f09c1914b693a5db349b73afbce483171b9d3050e1587"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.964067 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" event={"ID":"39ce6772-6cb0-4cfd-afaa-47f5a73ede25","Type":"ContainerStarted","Data":"9675f59213b79348287b0386cce2f72c2aa5aae7432d1c2d1ea1aaad30007dee"} Jan 26 00:11:46 crc 
kubenswrapper[5107]: I0126 00:11:46.965279 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" event={"ID":"1a708af6-a88c-47e1-85cf-8512edab0a65","Type":"ContainerStarted","Data":"f800381a31f6247e3d660785e5d4e1c2fb4653e4141953c509cb025ab505e55b"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.966651 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" event={"ID":"1dcc8c3a-74e3-404d-8f0f-cec0001cf476","Type":"ContainerStarted","Data":"c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.968244 5107 generic.go:358] "Generic (PLEG): container finished" podID="b18dee05-6423-4857-95c5-63d2a976e19f" containerID="8fc9aa4a2c8c0a8780cb03f0842b15eae12b2cbf237fd99a2e240d936b871dd1" exitCode=0 Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.968311 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" event={"ID":"b18dee05-6423-4857-95c5-63d2a976e19f","Type":"ContainerDied","Data":"8fc9aa4a2c8c0a8780cb03f0842b15eae12b2cbf237fd99a2e240d936b871dd1"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.970740 5107 generic.go:358] "Generic (PLEG): container finished" podID="cecc62a2-1a5f-4b0f-95bf-459d1493d1df" containerID="af3de3f1dee5daa4239a262f9b396fc488d0b82edace14305d9def5bbdaf05d8" exitCode=0 Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.970813 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" event={"ID":"cecc62a2-1a5f-4b0f-95bf-459d1493d1df","Type":"ContainerDied","Data":"af3de3f1dee5daa4239a262f9b396fc488d0b82edace14305d9def5bbdaf05d8"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.975254 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.975504 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" event={"ID":"e39cba7d-bc11-44ab-a079-c2b873d17ef9","Type":"ContainerStarted","Data":"0de10c180aaf1bfa764725c5da65f57cf43120b591b17cedff8c17360f2e7db6"} Jan 26 00:11:46 crc kubenswrapper[5107]: E0126 00:11:46.975787 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.475764777 +0000 UTC m=+152.393359123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.976712 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h" event={"ID":"55c6d87d-ae3b-4818-b6ea-d00e1a453c20","Type":"ContainerStarted","Data":"94bb4296890ebffc431f727f04425b4da5bee384219b754ccd613468bcd76e83"} Jan 26 00:11:46 crc kubenswrapper[5107]: W0126 00:11:46.977648 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21ff8993_d52d_4dcc_a520_c1f46e8e1c6f.slice/crio-ed2084bc9870c1310826629d82c0aa3474379e557f3750fca2d8062ee2e2f3a6 WatchSource:0}: Error finding container ed2084bc9870c1310826629d82c0aa3474379e557f3750fca2d8062ee2e2f3a6: Status 404 returned error can't find the container with id ed2084bc9870c1310826629d82c0aa3474379e557f3750fca2d8062ee2e2f3a6 Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.978362 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" event={"ID":"6b577373-c7f0-4128-953f-e221abc2d09b","Type":"ContainerStarted","Data":"be4ccf5996b8feded8da52d8845d3016d97f5a0f832bde361893c73bdf21591d"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.979734 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" event={"ID":"873d11a3-8ce7-483a-9496-18ce7ddc339c","Type":"ContainerStarted","Data":"adeebec6a70daae073e6e125d50532db8e305622996291b0d1c088c69ac06eee"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.981433 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" event={"ID":"80801f36-b03c-44af-bbaa-4e9a962f9a30","Type":"ContainerStarted","Data":"d73f9acf3362721e4cd1f1c6465e737e19da1569426de322b8969cf54f8be620"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.983179 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" event={"ID":"7ff95d2f-84b0-4ead-ab7d-65268a250ede","Type":"ContainerStarted","Data":"0fd0920e6241447a9fc66a25e9d69b1d11cb7f0cbc15496a365d9085699d0f92"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.984039 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" event={"ID":"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f","Type":"ContainerStarted","Data":"9ac9a2327ca9e2d4c26931368b7581f40fc79f21cd625b1a9148860e3cc81c8b"} Jan 26 00:11:46 crc kubenswrapper[5107]: I0126 00:11:46.984976 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" event={"ID":"b2d6954a-8d0a-453a-9f0f-0051f612d78b","Type":"ContainerStarted","Data":"c34f0ba69a1801861ea82649ca9b2329301449c9ceac662c272ce12d5fdb5e7a"} Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.052880 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca/service-ca-74545575db-lqkzh"] Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.077044 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:47 crc kubenswrapper[5107]: E0126 00:11:47.077403 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.577386342 +0000 UTC m=+152.494980688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.101079 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-4gmk9" podStartSLOduration=127.101051977 podStartE2EDuration="2m7.101051977s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:47.086253341 +0000 UTC m=+152.003847697" watchObservedRunningTime="2026-01-26 00:11:47.101051977 +0000 UTC m=+152.018646323" Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.110817 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6"] Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.177780 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:47 crc kubenswrapper[5107]: E0126 00:11:47.178177 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.678158434 +0000 UTC m=+152.595752780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.181982 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4"] Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.190206 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-bf6bf"] Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.195461 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk"] Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.199939 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk"] Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.269242 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7mhc8"] Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.279636 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:47 crc kubenswrapper[5107]: E0126 00:11:47.280173 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.78014744 +0000 UTC m=+152.697741846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.282093 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf"] Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.380798 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:47 crc kubenswrapper[5107]: E0126 00:11:47.381192 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:47.881174129 +0000 UTC m=+152.798768475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5107]: W0126 00:11:47.416034 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8781666d_7431_4ebc_aa57_0a90d686a8fd.slice/crio-0add2d1485c47ae3dd2b2e011cfea33eb0e74154d47e8af1a1078c6363fe4711 WatchSource:0}: Error finding container 0add2d1485c47ae3dd2b2e011cfea33eb0e74154d47e8af1a1078c6363fe4711: Status 404 returned error can't find the container with id 0add2d1485c47ae3dd2b2e011cfea33eb0e74154d47e8af1a1078c6363fe4711 Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.419216 5107 ???:1] "http: TLS handshake error from 192.168.126.11:38186: no serving certificate available for the kubelet" Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.483717 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:47 crc kubenswrapper[5107]: E0126 00:11:47.484748 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:47.984732939 +0000 UTC m=+152.902327285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.507409 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-lpd5s" podStartSLOduration=126.507386426 podStartE2EDuration="2m6.507386426s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:47.503190458 +0000 UTC m=+152.420784804" watchObservedRunningTime="2026-01-26 00:11:47.507386426 +0000 UTC m=+152.424980762" Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.518741 5107 ???:1] "http: TLS handshake error from 192.168.126.11:38200: no serving certificate available for the kubelet" Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.571241 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6wxtb" podStartSLOduration=127.571215249 podStartE2EDuration="2m7.571215249s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:47.567523016 +0000 UTC m=+152.485117362" watchObservedRunningTime="2026-01-26 00:11:47.571215249 +0000 UTC m=+152.488809595" Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.585265 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:47 crc kubenswrapper[5107]: E0126 00:11:47.586028 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.086010955 +0000 UTC m=+153.003605301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.688544 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:47 crc kubenswrapper[5107]: E0126 00:11:47.689312 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.189290937 +0000 UTC m=+153.106885283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.789685 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:47 crc kubenswrapper[5107]: E0126 00:11:47.790289 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.290266245 +0000 UTC m=+153.207860591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.891703 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:47 crc kubenswrapper[5107]: E0126 00:11:47.892112 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.392098266 +0000 UTC m=+153.309692612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.991806 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-64rgr" event={"ID":"4498876a-5953-499f-aa71-6899b8529dcf","Type":"ContainerStarted","Data":"5041f82636ad9985627f247050509b672b7374047431fb966605ec3ca0acfb7d"} Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.992414 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:47 crc kubenswrapper[5107]: E0126 00:11:47.992963 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.49294411 +0000 UTC m=+153.410538456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.994016 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-jn9bq" event={"ID":"42d6fb86-e6fd-4b77-b921-d62cd5b6e825","Type":"ContainerStarted","Data":"25c753829a3ea671654968aa55a9066bc67cd29dcef2e5f1416e65017e329236"} Jan 26 00:11:47 crc kubenswrapper[5107]: I0126 00:11:47.994995 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk" event={"ID":"ea19e3ee-138c-4fc9-aa7f-c2c7747b3468","Type":"ContainerStarted","Data":"ed837a1d2ae37eaba12c558dce03f674c05ea2720da02cde58615f2f03b32271"} Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.036992 5107 ???:1] "http: TLS handshake error from 192.168.126.11:38216: no serving certificate available for the kubelet" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.059848 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" event={"ID":"add7b84d-7f90-4850-9568-c7f3755404ca","Type":"ContainerStarted","Data":"bba18a804437199383eb27ff1dca012e3f69242c84868f4683f8caffbcb76b03"} Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.085727 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-64rgr" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.153150 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:48 crc kubenswrapper[5107]: E0126 00:11:48.153546 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.653526513 +0000 UTC m=+153.571120869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.158977 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.159038 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.181583 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29489760-jn9bq" podStartSLOduration=128.181564231 podStartE2EDuration="2m8.181564231s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:48.180021067 +0000 UTC m=+153.097615413" watchObservedRunningTime="2026-01-26 00:11:48.181564231 +0000 UTC m=+153.099158577" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.190716 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" event={"ID":"3b5528d4-4c62-46d7-89d9-3a6de1a8f546","Type":"ContainerStarted","Data":"e6f9725ff94cd363b69e5bf65b78eb5e3e6a25935b9163e3a49a1a314d772d08"} Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.190771 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" event={"ID":"27c8cad2-b082-4ca6-b198-1d9817a2e90e","Type":"ContainerStarted","Data":"68f02a25071b4930b46b10f5dd9c93e6005b952218c6698b8ea0a8b776e7a106"} Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.190788 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-lqkzh" event={"ID":"3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c","Type":"ContainerStarted","Data":"8f2c25b436017f681ffe4c504eee4bb068291e8566dab6fef8abdfd93b8c217c"} Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.190802 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" event={"ID":"dd738239-7d02-47f2-aad8-bb51fbe73201","Type":"ContainerStarted","Data":"6e43dfaa77012b4f1892634ef0e714a82d3c624494965492648180af9f56a111"} Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.192986 5107 ???:1] "http: TLS handshake error from 192.168.126.11:38218: no serving certificate available for the kubelet" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.192999 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" 
event={"ID":"8781666d-7431-4ebc-aa57-0a90d686a8fd","Type":"ContainerStarted","Data":"0add2d1485c47ae3dd2b2e011cfea33eb0e74154d47e8af1a1078c6363fe4711"} Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.197576 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" event={"ID":"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7","Type":"ContainerStarted","Data":"1edd6a4df1f773b479ae337abbb27fd40a956372f971ca4555ef9ccdb04159a2"} Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.204935 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" podStartSLOduration=127.204912197 podStartE2EDuration="2m7.204912197s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:48.203188838 +0000 UTC m=+153.120783184" watchObservedRunningTime="2026-01-26 00:11:48.204912197 +0000 UTC m=+153.122506543" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.211943 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" event={"ID":"c53ce89a-3e31-41ac-96d2-c4326f044986","Type":"ContainerStarted","Data":"c465145d460f48f4588ce285679021a144470009e7a0374de71133d96d2554df"} Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.214586 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" event={"ID":"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f","Type":"ContainerStarted","Data":"ed2084bc9870c1310826629d82c0aa3474379e557f3750fca2d8062ee2e2f3a6"} Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.217596 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" event={"ID":"8e75356d-8170-4619-9539-ea5e50c2b892","Type":"ContainerStarted","Data":"7adfe1f469e4ad9ecf95d86cb40bc94453e78257c4d6e5278be75771edf01805"} Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.221915 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-64rgr" podStartSLOduration=127.221894814 podStartE2EDuration="2m7.221894814s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:48.219052354 +0000 UTC m=+153.136646700" watchObservedRunningTime="2026-01-26 00:11:48.221894814 +0000 UTC m=+153.139489160" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.254284 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5107]: E0126 00:11:48.254438 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.754416208 +0000 UTC m=+153.672010554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.254753 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:48 crc kubenswrapper[5107]: E0126 00:11:48.255219 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.75519663 +0000 UTC m=+153.672790986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.284943 5107 ???:1] "http: TLS handshake error from 192.168.126.11:38226: no serving certificate available for the kubelet" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.337041 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.360203 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.364781 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" podStartSLOduration=127.364756108 podStartE2EDuration="2m7.364756108s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:48.353820901 +0000 UTC m=+153.271415277" watchObservedRunningTime="2026-01-26 00:11:48.364756108 +0000 UTC m=+153.282350464" Jan 26 00:11:48 crc kubenswrapper[5107]: E0126 00:11:48.365331 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.865288363 +0000 UTC m=+153.782882719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.395195 5107 ???:1] "http: TLS handshake error from 192.168.126.11:38242: no serving certificate available for the kubelet" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.396012 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.466748 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:48 crc kubenswrapper[5107]: E0126 00:11:48.467247 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.967230678 +0000 UTC m=+153.884825104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.481029 5107 ???:1] "http: TLS handshake error from 192.168.126.11:38244: no serving certificate available for the kubelet" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.568146 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5107]: E0126 00:11:48.568355 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.068325569 +0000 UTC m=+153.985919915 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.568926 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:48 crc kubenswrapper[5107]: E0126 00:11:48.569384 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.069352218 +0000 UTC m=+153.986946574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.606416 5107 ???:1] "http: TLS handshake error from 192.168.126.11:38250: no serving certificate available for the kubelet" Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.670578 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5107]: E0126 00:11:48.671126 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.171104857 +0000 UTC m=+154.088699203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5107]: I0126 00:11:48.989218 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:48 crc kubenswrapper[5107]: E0126 00:11:48.989698 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.489684239 +0000 UTC m=+154.407278585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5107]: E0126 00:11:49.091019 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.590990776 +0000 UTC m=+154.508585122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.091344 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.091869 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:49 crc kubenswrapper[5107]: E0126 00:11:49.092360 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.592343114 +0000 UTC m=+154.509937460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.196811 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5107]: E0126 00:11:49.197429 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.697405116 +0000 UTC m=+154.614999472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.311525 5107 ???:1] "http: TLS handshake error from 192.168.126.11:38262: no serving certificate available for the kubelet" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.324618 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:49 crc kubenswrapper[5107]: E0126 00:11:49.325555 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.825535417 +0000 UTC m=+154.743129753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.373431 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-mjn4v" event={"ID":"b44ede31-5627-4422-b319-14db754817f4","Type":"ContainerStarted","Data":"6e5832a78275281b37601652adea0a404e919088f1d71689067ffb7ba64ddbe8"} Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.374600 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.378939 5107 patch_prober.go:28] interesting pod/console-operator-67c89758df-mjn4v container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.379018 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-mjn4v" podUID="b44ede31-5627-4422-b319-14db754817f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.381719 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" event={"ID":"6ba61487-45ca-44b7-aaed-0faa630aaa88","Type":"ContainerStarted","Data":"f7aacf8b66a139e6e485f258bc220b6ba1d85aa95e1f609be8d953bdb3ee92c4"} Jan 26 00:11:49 
crc kubenswrapper[5107]: I0126 00:11:49.427455 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5107]: E0126 00:11:49.428703 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.928684514 +0000 UTC m=+154.846278860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.457121 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" event={"ID":"c1eb51c7-ee2f-4230-929d-62d6608eca89","Type":"ContainerStarted","Data":"c552ee5e0a5ac231f695f5a2a0838b3e4acd7e8bab123274e6c43d2ef07f5fef"} Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.458048 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.459477 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" event={"ID":"0470d1dc-849c-40d7-9a25-efb425c4e111","Type":"ContainerStarted","Data":"08ed100345c3f04c43e9f5e5107bd0776db3843731ca2068e315e94ece8b332f"} Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.502351 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-mjn4v" podStartSLOduration=129.502332124 podStartE2EDuration="2m9.502332124s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:49.497073526 +0000 UTC m=+154.414667872" watchObservedRunningTime="2026-01-26 00:11:49.502332124 +0000 UTC m=+154.419926470" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.511092 5107 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-wsw2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.511174 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.520533 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" event={"ID":"936adeed-5876-49da-b102-8187f5bc998a","Type":"ContainerStarted","Data":"dd69a4d5f84a02bf7f81b78f407a3e85e9d8de30da557e9419b082125793ad9f"} Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.528839 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:49 crc kubenswrapper[5107]: E0126 00:11:49.529430 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.029413735 +0000 UTC m=+154.947008081 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.549538 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-wwqgc" podStartSLOduration=129.5495124 podStartE2EDuration="2m9.5495124s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:49.538397628 +0000 UTC m=+154.455991994" watchObservedRunningTime="2026-01-26 00:11:49.5495124 +0000 UTC m=+154.467106756" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.559214 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" event={"ID":"086c90e6-e51d-42dc-be10-5df7ebaa5e16","Type":"ContainerStarted","Data":"494e93d86c5590d05b4dcccb59455b8368f3b1c152a13321121295d100e91903"} Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.617599 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-7mzzj" podStartSLOduration=128.617575002 podStartE2EDuration="2m8.617575002s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:49.572213998 +0000 UTC m=+154.489808344" watchObservedRunningTime="2026-01-26 00:11:49.617575002 +0000 UTC m=+154.535169358" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.619816 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" event={"ID":"d93df320-4284-49f0-b63d-ba8a86943f2e","Type":"ContainerStarted","Data":"fb056342c376f0f8e441027f13f024c742b5377e6f69864030fadb560fb90a89"} Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.620712 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.625232 5107 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-59jn5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.625296 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.630019 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5107]: E0126 00:11:49.630355 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.130317051 +0000 UTC m=+155.047911397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.632848 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" podStartSLOduration=129.632812621 podStartE2EDuration="2m9.632812621s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:49.613962931 +0000 UTC m=+154.531557297" watchObservedRunningTime="2026-01-26 00:11:49.632812621 +0000 UTC m=+154.550406977" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.636830 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" event={"ID":"7ff95d2f-84b0-4ead-ab7d-65268a250ede","Type":"ContainerStarted","Data":"0cf3843386f7715aa3690596285805ef5154f7da3319f5e1cbd9b61ce5944f58"} Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.653622 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" event={"ID":"2c023721-040d-42ad-b8f7-6c190a17f193","Type":"ContainerStarted","Data":"e83e480f960f288a91e9c4488dd2ff82ba5245f220318e42e20998b5e809d7eb"} Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.675851 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-klq76" event={"ID":"ff2a0637-a303-4291-9db1-a2edaa44d952","Type":"ContainerStarted","Data":"c1de7c59ddc2253c70320b028be1cd72e4692cd456ddf9b6e830a99e5bab2ab4"} Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.683420 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-2ldq5" podStartSLOduration=128.683398062 podStartE2EDuration="2m8.683398062s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:49.680780359 +0000 UTC m=+154.598374705" watchObservedRunningTime="2026-01-26 00:11:49.683398062 +0000 UTC m=+154.600992418" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.684103 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-n2dtl" podStartSLOduration=129.684094672 podStartE2EDuration="2m9.684094672s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:49.645004843 +0000 UTC m=+154.562599189" watchObservedRunningTime="2026-01-26 00:11:49.684094672 +0000 UTC m=+154.601689018" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.737487 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.741382 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-klq76" podStartSLOduration=9.741359131 podStartE2EDuration="9.741359131s" podCreationTimestamp="2026-01-26 00:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:49.713367074 +0000 UTC m=+154.630961420" watchObservedRunningTime="2026-01-26 00:11:49.741359131 +0000 UTC m=+154.658953477" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.741932 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" podStartSLOduration=128.741923057 podStartE2EDuration="2m8.741923057s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:49.738408738 +0000 UTC m=+154.656003084" watchObservedRunningTime="2026-01-26 00:11:49.741923057 +0000 UTC m=+154.659517403" Jan 26 00:11:49 crc kubenswrapper[5107]: E0126 00:11:49.742013 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.241990949 +0000 UTC m=+155.159585295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.768494 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" event={"ID":"926c0a09-eb65-428f-9fd5-9c7c6c80799d","Type":"ContainerStarted","Data":"37a7dc743a48576f19f678ea6a06b760b92a9aa9ed408a232535a982d054b333"} Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.770006 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.770073 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.771254 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.773161 5107 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-gg5st container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.773222 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" podUID="926c0a09-eb65-428f-9fd5-9c7c6c80799d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.840595 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5107]: E0126 00:11:49.842235 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.342198905 +0000 UTC m=+155.259793261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.855413 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" podStartSLOduration=128.855384315 podStartE2EDuration="2m8.855384315s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:49.840374933 +0000 UTC m=+154.757969279" watchObservedRunningTime="2026-01-26 00:11:49.855384315 +0000 UTC m=+154.772978651" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.857507 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-gjkxw" podStartSLOduration=128.857487804 podStartE2EDuration="2m8.857487804s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:49.798596199 +0000 UTC m=+154.716190555" watchObservedRunningTime="2026-01-26 00:11:49.857487804 +0000 UTC m=+154.775082160" Jan 26 00:11:49 crc kubenswrapper[5107]: I0126 00:11:49.944920 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:49 crc kubenswrapper[5107]: E0126 00:11:49.946265 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.446246168 +0000 UTC m=+155.363840604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5107]: I0126 00:11:50.053356 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:50 crc kubenswrapper[5107]: E0126 00:11:50.054548 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.554516731 +0000 UTC m=+155.472111077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5107]: I0126 00:11:50.202471 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:50 crc kubenswrapper[5107]: E0126 00:11:50.203114 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.703095676 +0000 UTC m=+155.620690022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5107]: I0126 00:11:50.304598 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:50 crc kubenswrapper[5107]: E0126 00:11:50.304792 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.804767363 +0000 UTC m=+155.722361709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5107]: I0126 00:11:50.305114 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:50 crc kubenswrapper[5107]: E0126 00:11:50.306227 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.805845633 +0000 UTC m=+155.723439979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5107]: I0126 00:11:50.405991 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:50 crc kubenswrapper[5107]: E0126 00:11:50.406580 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.906560734 +0000 UTC m=+155.824155080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5107]: I0126 00:11:50.509522 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:50 crc kubenswrapper[5107]: E0126 00:11:50.510507 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.010483014 +0000 UTC m=+155.928077360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5107]: I0126 00:11:50.612285 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:50 crc kubenswrapper[5107]: E0126 00:11:50.612650 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.112626904 +0000 UTC m=+156.030221240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5107]: I0126 00:11:50.727032 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:50 crc kubenswrapper[5107]: E0126 00:11:50.727606 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.227591925 +0000 UTC m=+156.145186271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:50 crc kubenswrapper[5107]: I0126 00:11:50.770666 5107 ???:1] "http: TLS handshake error from 192.168.126.11:38270: no serving certificate available for the kubelet" Jan 26 00:11:50 crc kubenswrapper[5107]: I0126 00:11:50.884309 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:50 crc kubenswrapper[5107]: E0126 00:11:50.884690 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.384667382 +0000 UTC m=+156.302261738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:50.987182 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:51 crc kubenswrapper[5107]: E0126 00:11:50.987763 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.487746291 +0000 UTC m=+156.405340637 (durationBeforeRetry 500ms). 
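
The interleaved "http: TLS handshake error … no serving certificate available for the kubelet" lines (here and at 00:11:49.311525 above) mean the kubelet is still answering its server port without a serving certificate, which typically happens while its kubelet-serving CSR has not yet been issued or approved. A minimal sketch for listing those CSRs and their state follows; again this assumes the official kubernetes Python client and a working kubeconfig, and is an illustrative check rather than something present in the log.

# Sketch (assumption): list kubelet-serving CertificateSigningRequests and
# whether each has been approved/issued.
from kubernetes import client, config

config.load_kube_config()          # assumes a kubeconfig for this cluster
certs = client.CertificatesV1Api()

for csr in certs.list_certificate_signing_request().items:
    if csr.spec.signer_name != "kubernetes.io/kubelet-serving":
        continue
    conditions = [c.type for c in ((csr.status.conditions if csr.status else None) or [])]
    issued = bool(csr.status and csr.status.certificate)
    print(csr.metadata.name, "conditions:", conditions, "issued:", issued)
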
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.088514 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:51 crc kubenswrapper[5107]: E0126 00:11:51.089210 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.589191515 +0000 UTC m=+156.506785861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.207984 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:51 crc kubenswrapper[5107]: E0126 00:11:51.208628 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.708608142 +0000 UTC m=+156.626202488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.209071 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" event={"ID":"8e75356d-8170-4619-9539-ea5e50c2b892","Type":"ContainerStarted","Data":"8a5b1377344c442bbe489c80f3b7f0a7c74cb6841bc2b430c0cc917a4dc94ad7"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.216226 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk" event={"ID":"ea19e3ee-138c-4fc9-aa7f-c2c7747b3468","Type":"ContainerStarted","Data":"8d4c60e50d56d50fea8c9907e2e6ca695a1608304f3e3a0b862ddbbc13f4db70"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.238786 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-46x2w" event={"ID":"1ad6f093-e118-435a-9ebd-f7346da27676","Type":"ContainerStarted","Data":"91a2a962113dc3939c5c9794ea269dd9c0d3d434f859aeb9a89cdda1b1177b82"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.282716 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" podStartSLOduration=131.282698964 podStartE2EDuration="2m11.282698964s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:51.282436936 +0000 UTC m=+156.200031282" watchObservedRunningTime="2026-01-26 00:11:51.282698964 +0000 UTC m=+156.200293310" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.305975 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" event={"ID":"568b36ce-cb38-401e-afc3-3c6e518c9c1a","Type":"ContainerStarted","Data":"ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.306368 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.313776 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:51 crc kubenswrapper[5107]: E0126 00:11:51.315801 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.815781083 +0000 UTC m=+156.733375429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.359995 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gfrwv" event={"ID":"bd500fea-ccff-4a18-98ca-449906eac69c","Type":"ContainerStarted","Data":"8da8936b505740681433c8e775fa7df0538a7c62e7a597441f41b5fbe05a37fa"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.364213 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-lckdk" podStartSLOduration=130.364178323 podStartE2EDuration="2m10.364178323s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:51.36051061 +0000 UTC m=+156.278104966" watchObservedRunningTime="2026-01-26 00:11:51.364178323 +0000 UTC m=+156.281772669" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.415080 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:51 crc kubenswrapper[5107]: E0126 00:11:51.416571 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.916548725 +0000 UTC m=+156.834143071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.433688 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" event={"ID":"e39cba7d-bc11-44ab-a079-c2b873d17ef9","Type":"ContainerStarted","Data":"39773665baa552043673dacb25007ba424e77a03e349f20c8511b8950b17abc8"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.475347 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-gfrwv" podStartSLOduration=11.475325126 podStartE2EDuration="11.475325126s" podCreationTimestamp="2026-01-26 00:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:51.438389379 +0000 UTC m=+156.355983735" watchObservedRunningTime="2026-01-26 00:11:51.475325126 +0000 UTC m=+156.392919472" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.516665 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:51 crc kubenswrapper[5107]: E0126 00:11:51.517376 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.017342117 +0000 UTC m=+156.934936503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.523946 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" event={"ID":"7c7d5497-9496-4ba6-8f07-95f5d955d403","Type":"ContainerStarted","Data":"8cf682cb8a3e38109d109d07322e94f8fe2926d8b15c043057fae6c263d552fc"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.545294 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" podStartSLOduration=11.545270852 podStartE2EDuration="11.545270852s" podCreationTimestamp="2026-01-26 00:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:51.542671709 +0000 UTC m=+156.460266065" watchObservedRunningTime="2026-01-26 00:11:51.545270852 +0000 UTC m=+156.462865198" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.575657 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" event={"ID":"cecc62a2-1a5f-4b0f-95bf-459d1493d1df","Type":"ContainerStarted","Data":"73e862e076f9de47d58860e167f31cdffd87c68efa6df93b264b09e8411c1a83"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.590778 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.608235 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.613205 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h" event={"ID":"55c6d87d-ae3b-4818-b6ea-d00e1a453c20","Type":"ContainerStarted","Data":"611fb93fd0eb17f002deea378f021b928e3550a06065057b6119c2974fa6e9fc"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.619667 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:51 crc kubenswrapper[5107]: E0126 00:11:51.620185 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.120167157 +0000 UTC m=+157.037761503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.648835 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" event={"ID":"dd738239-7d02-47f2-aad8-bb51fbe73201","Type":"ContainerStarted","Data":"9fa7798fe42961496e4bf458bd104c89d58e47c2e476a49f69c4fe41628f50be"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.692140 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" event={"ID":"39ce6772-6cb0-4cfd-afaa-47f5a73ede25","Type":"ContainerStarted","Data":"ee1261823d1c5ed351745f8254e901858c23d2eaa79060d3e72e9b75ad109103"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.702735 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-rhc6b" podStartSLOduration=130.702711076 podStartE2EDuration="2m10.702711076s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:51.612345487 +0000 UTC m=+156.529939843" watchObservedRunningTime="2026-01-26 00:11:51.702711076 +0000 UTC m=+156.620305422" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.723422 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:51 crc kubenswrapper[5107]: E0126 00:11:51.724098 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.224076196 +0000 UTC m=+157.141670542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.771962 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" podStartSLOduration=131.771945732 podStartE2EDuration="2m11.771945732s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:51.771155959 +0000 UTC m=+156.688750325" watchObservedRunningTime="2026-01-26 00:11:51.771945732 +0000 UTC m=+156.689540078" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.825120 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:51 crc kubenswrapper[5107]: E0126 00:11:51.831981 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.331953938 +0000 UTC m=+157.249548284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.842966 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" event={"ID":"71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7","Type":"ContainerStarted","Data":"c9b8f7cc692782fa0b1a9f545348a3d0bb4480a1b46e3e06c4c9a718c0d62e98"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.843721 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.882708 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-r75qz" podStartSLOduration=130.882689734 podStartE2EDuration="2m10.882689734s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:51.881847 +0000 UTC m=+156.799441366" watchObservedRunningTime="2026-01-26 00:11:51.882689734 +0000 UTC m=+156.800284090" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.886990 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" event={"ID":"be2bed85-ec40-4cd3-bf51-8e7ed0111e6f","Type":"ContainerStarted","Data":"889c191a847e3312d8463910a9989b090c59289e678b319b6c77b7af026fbedc"} Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.952089 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:51 crc kubenswrapper[5107]: E0126 00:11:51.953330 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.453307848 +0000 UTC m=+157.370902194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.982519 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.991625 5107 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-jxbv4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.991718 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" podUID="71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.991810 5107 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-xqx9c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 26 00:11:51 crc kubenswrapper[5107]: I0126 00:11:51.991869 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" podUID="21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.011579 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" event={"ID":"873d11a3-8ce7-483a-9496-18ce7ddc339c","Type":"ContainerStarted","Data":"85416cf4da0b981111d29fbbe3ff6aabac07ca4e1ed545adec42747f7f619b0e"} Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.053342 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.057545 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.557517666 +0000 UTC m=+157.475112012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.065148 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" event={"ID":"6b577373-c7f0-4128-953f-e221abc2d09b","Type":"ContainerStarted","Data":"d804798ece115955e651a5956f3a9d45df4e3471750a679b520b2f69b6e6bc0f"} Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.074093 5107 patch_prober.go:28] interesting pod/console-operator-67c89758df-mjn4v container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.074194 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-mjn4v" podUID="b44ede31-5627-4422-b319-14db754817f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.075752 5107 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-59jn5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.075858 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.076514 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" podStartSLOduration=131.07649041 podStartE2EDuration="2m11.07649041s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:52.074817533 +0000 UTC m=+156.992411889" watchObservedRunningTime="2026-01-26 00:11:52.07649041 +0000 UTC m=+156.994084756" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.076558 5107 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-wsw2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.080219 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.083211 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-gg5st" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.155589 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.156344 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.656320263 +0000 UTC m=+157.573914619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.273124 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.274218 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" podStartSLOduration=131.274196565 podStartE2EDuration="2m11.274196565s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:52.27080162 +0000 UTC m=+157.188395976" watchObservedRunningTime="2026-01-26 00:11:52.274196565 +0000 UTC m=+157.191790911" Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.296728 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.796708368 +0000 UTC m=+157.714302804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.376470 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.377449 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.877418856 +0000 UTC m=+157.795013202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.483783 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.484384 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.984366701 +0000 UTC m=+157.901961047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.485137 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.585821 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.588406 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.088385104 +0000 UTC m=+158.005979450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.625366 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-96jl7" podStartSLOduration=131.625344833 podStartE2EDuration="2m11.625344833s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:52.534657414 +0000 UTC m=+157.452251760" watchObservedRunningTime="2026-01-26 00:11:52.625344833 +0000 UTC m=+157.542939179" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.630460 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvxpc"] Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.633306 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" podStartSLOduration=131.633279166 podStartE2EDuration="2m11.633279166s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:52.623371017 +0000 UTC m=+157.540965373" watchObservedRunningTime="2026-01-26 00:11:52.633279166 +0000 UTC m=+157.550873512" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.688667 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.691754 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.191732198 +0000 UTC m=+158.109326544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.793133 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.793372 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.293341624 +0000 UTC m=+158.210935970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.793519 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.794197 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.294189958 +0000 UTC m=+158.211784304 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.894786 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.894988 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.394963769 +0000 UTC m=+158.312558115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.895409 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.895771 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.395762582 +0000 UTC m=+158.313356928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.972799 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.972873 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.986875 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.997293 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.997555 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.49749395 +0000 UTC m=+158.415088296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5107]: I0126 00:11:52.997667 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:52 crc kubenswrapper[5107]: E0126 00:11:52.998130 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.498111057 +0000 UTC m=+158.415705403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.105090 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.105487 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.605459563 +0000 UTC m=+158.523053909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.105767 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.106291 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.606281907 +0000 UTC m=+158.523876253 (durationBeforeRetry 500ms). 
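[Annotation, not part of the captured log] The driver-not-registered error keeps recurring on a 500ms retry cadence while the provisioner's node plugin is absent. A complementary node-side sketch, assuming the kubelet's default plugin-registration directory /var/lib/kubelet/plugins_registry and the usual <driver>-reg.sock naming of the node-driver-registrar (neither is confirmed by this log):

# Hedged sketch, to be run on the node itself (for example via `oc debug node/crc`).
# The directory path and socket-name convention are assumptions, not taken from this log.
import os

reg_dir = "/var/lib/kubelet/plugins_registry"
entries = sorted(os.listdir(reg_dir)) if os.path.isdir(reg_dir) else []
print("plugin registration sockets:", entries or "none")

if not any("kubevirt.io.hostpath-provisioner" in e for e in entries):
    # No registration socket yet: consistent with the repeated
    # "not found in the list of registered CSI drivers" errors in this log.
    print("hostpath-provisioner node plugin has not registered with the kubelet yet")
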
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.135200 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-lqkzh" event={"ID":"3c2a5e43-6a8f-4c5f-95f7-6d5420fbfb1c","Type":"ContainerStarted","Data":"9882d229206167fc13a36ed9c32e08b23bf42bf7d0c28a243b5aa9ec05bd5a3c"} Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.170603 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-lqkzh" podStartSLOduration=132.170570033 podStartE2EDuration="2m12.170570033s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:53.167823536 +0000 UTC m=+158.085417902" watchObservedRunningTime="2026-01-26 00:11:53.170570033 +0000 UTC m=+158.088164399" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.183633 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2cdrs" event={"ID":"1a708af6-a88c-47e1-85cf-8512edab0a65","Type":"ContainerStarted","Data":"bffbf0da0b7f021ca50b2e2a0a961628c011ca81fe663b11dc50aae2f644d6fc"} Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.208567 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.210403 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.710378452 +0000 UTC m=+158.627972818 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.215515 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" event={"ID":"8781666d-7431-4ebc-aa57-0a90d686a8fd","Type":"ContainerStarted","Data":"dd084fa048c527f82a5897ce1f640874f33813b3808b55602adadc2a9fa5974f"} Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.249090 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" event={"ID":"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f","Type":"ContainerStarted","Data":"f35ee95538134827987f7dee88a5ce9aa2a91555f1ffea857329a5fa6d6b8175"} Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.250165 5107 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-xqx9c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.250232 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" podUID="21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.261209 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" event={"ID":"80801f36-b03c-44af-bbaa-4e9a962f9a30","Type":"ContainerStarted","Data":"c03890adb3d6c05c793b2f070f32f7ce36f69b304c3a6f3fcb67c937bdb05bd1"} Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.261224 5107 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-wsw2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.261599 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.263817 5107 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-jxbv4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.263866 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" podUID="71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.272449 5107 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-zmswq container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.272514 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" podUID="cecc62a2-1a5f-4b0f-95bf-459d1493d1df" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.273079 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-kq9jq" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.296374 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.297250 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.300435 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podStartSLOduration=132.300410282 podStartE2EDuration="2m12.300410282s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:53.295783242 +0000 UTC m=+158.213377608" watchObservedRunningTime="2026-01-26 00:11:53.300410282 +0000 UTC m=+158.218004638" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.301086 5107 patch_prober.go:28] interesting pod/console-64d44f6ddf-4gmk9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.301150 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-4gmk9" podUID="1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.313124 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.318601 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:53.818581782 +0000 UTC m=+158.736176228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.424025 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.424597 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.924573031 +0000 UTC m=+158.842167377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.424742 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.425912 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.925878748 +0000 UTC m=+158.843473214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.526064 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.526656 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.026636799 +0000 UTC m=+158.944231145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.627825 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.628358 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.128332397 +0000 UTC m=+159.045926803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.641913 5107 ???:1] "http: TLS handshake error from 192.168.126.11:59002: no serving certificate available for the kubelet" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.728949 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.729120 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.229090318 +0000 UTC m=+159.146684664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.729645 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.729963 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.229953582 +0000 UTC m=+159.147547928 (durationBeforeRetry 500ms). 
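[Annotation, not part of the captured log] Interleaved with the volume retries, the kubelet also reports a TLS handshake error because it has no serving certificate yet; that normally clears once its kubelet-serving CertificateSigningRequest is approved. A sketch for listing outstanding CSRs (assumes kubeconfig access; the log does not show which CSRs exist):

# Hedged sketch: list CertificateSigningRequests so a pending kubelet-serving CSR is visible.
from kubernetes import client, config

config.load_kube_config()
certs = client.CertificatesV1Api()

for csr in certs.list_certificate_signing_request().items:
    conditions = [c.type for c in (csr.status.conditions or [])]
    print(csr.metadata.name, csr.spec.signer_name, conditions or ["Pending"])
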
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.737188 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.738797 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.738873 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.747750 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.747833 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.839658 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.840216 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.34017657 +0000 UTC m=+159.257770916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.840731 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.841345 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.341326762 +0000 UTC m=+159.258921118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.942050 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5107]: E0126 00:11:53.942782 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.442740062 +0000 UTC m=+159.360334408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.992896 5107 patch_prober.go:28] interesting pod/console-operator-67c89758df-mjn4v container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5107]: I0126 00:11:53.992988 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-67c89758df-mjn4v" podUID="b44ede31-5627-4422-b319-14db754817f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.049824 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:54 crc kubenswrapper[5107]: E0126 00:11:54.050531 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.55051526 +0000 UTC m=+159.468109606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.151393 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:54 crc kubenswrapper[5107]: E0126 00:11:54.151512 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.651485728 +0000 UTC m=+159.569080094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.151732 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:54 crc kubenswrapper[5107]: E0126 00:11:54.152302 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.65229161 +0000 UTC m=+159.569885956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.253396 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:54 crc kubenswrapper[5107]: E0126 00:11:54.254113 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.754077621 +0000 UTC m=+159.671671967 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.254546 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:54 crc kubenswrapper[5107]: E0126 00:11:54.255283 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.755264194 +0000 UTC m=+159.672858610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.312325 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" event={"ID":"7ff95d2f-84b0-4ead-ab7d-65268a250ede","Type":"ContainerStarted","Data":"b8b60382490cf711b075eb4bd2346b13eef8ed0f5ec1667a77628da83d5d6452"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.325326 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" event={"ID":"dd738239-7d02-47f2-aad8-bb51fbe73201","Type":"ContainerStarted","Data":"04f6b1ff870441055f97d222874d6c72aababcd72347852bc3d969dfbea9d508"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.345188 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" event={"ID":"8781666d-7431-4ebc-aa57-0a90d686a8fd","Type":"ContainerStarted","Data":"e9c4ebe1a81d7b3c66d0c42d10d985d525b167b61cdaacbbec0749d8198c0ab2"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.350751 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" event={"ID":"873d11a3-8ce7-483a-9496-18ce7ddc339c","Type":"ContainerStarted","Data":"3d8f8ee77acba49182b2c16fbc803cbde81de27dd198b14e43dca098560dce9b"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.363102 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:54 crc kubenswrapper[5107]: E0126 00:11:54.364006 5107 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.863961868 +0000 UTC m=+159.781556214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.372107 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" event={"ID":"6b577373-c7f0-4128-953f-e221abc2d09b","Type":"ContainerStarted","Data":"ff3702a2cbc8e078ef417c32853fa2b4b347d0fe7b140370e07041d602e41585"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.377650 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" event={"ID":"b2d6954a-8d0a-453a-9f0f-0051f612d78b","Type":"ContainerStarted","Data":"ad6a4f0cad5b924558854123f00e2ecce0a6f270988df1d0c0f51ee64bf2f2e1"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.377711 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" event={"ID":"b2d6954a-8d0a-453a-9f0f-0051f612d78b","Type":"ContainerStarted","Data":"9fddb441786959877f38910a6c6001bd81c96d31f22e30d1e780a1dd4fc4ef65"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.384599 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f9vts" podStartSLOduration=133.384575258 podStartE2EDuration="2m13.384575258s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.379794583 +0000 UTC m=+159.297388959" watchObservedRunningTime="2026-01-26 00:11:54.384575258 +0000 UTC m=+159.302169604" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.395813 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-46x2w" event={"ID":"1ad6f093-e118-435a-9ebd-f7346da27676","Type":"ContainerStarted","Data":"a0d5b6771ea84896decdf26df142ad1426ec3ab05e5007f54e95044c5439e512"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.396664 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-46x2w" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.398870 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" event={"ID":"3b5528d4-4c62-46d7-89d9-3a6de1a8f546","Type":"ContainerStarted","Data":"3dcd2c8160a632ec295ac6e6136ee4da59b708e7c8d4f422915f137b925269b3"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.420324 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" 
event={"ID":"27c8cad2-b082-4ca6-b198-1d9817a2e90e","Type":"ContainerStarted","Data":"fb3c04c4bdd9a459c64c86492f8b7cc9c9801ede506467de40057a4efd61421a"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.421548 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.422736 5107 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-wk2r6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.422802 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" podUID="27c8cad2-b082-4ca6-b198-1d9817a2e90e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.428814 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" event={"ID":"b18dee05-6423-4857-95c5-63d2a976e19f","Type":"ContainerStarted","Data":"7c18b6bb5d97796f81934c279d5309af281feb71eeed7952c54c8dc0b58534f7"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.439867 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" event={"ID":"e39cba7d-bc11-44ab-a079-c2b873d17ef9","Type":"ContainerStarted","Data":"ff9723b61156b9db994a0bc5884ae8cf383d0510c17d43aef1c8833d8bbfc634"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.451739 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.495534 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:54 crc kubenswrapper[5107]: E0126 00:11:54.497723 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.997705757 +0000 UTC m=+159.915300103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.570878 5107 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-59jn5 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.570967 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.573376 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h" event={"ID":"55c6d87d-ae3b-4818-b6ea-d00e1a453c20","Type":"ContainerStarted","Data":"de534308269b2d0fce0bb75eb93fd06283ed6f3d8c8b2f181076192117941124"} Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.573583 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" podUID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" gracePeriod=30 Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.578507 5107 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-xqx9c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.578540 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" podUID="21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.603984 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:54 crc kubenswrapper[5107]: E0126 00:11:54.605388 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.105365572 +0000 UTC m=+160.022959928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.636832 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-bf6bf" podStartSLOduration=133.636807305 podStartE2EDuration="2m13.636807305s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.439987485 +0000 UTC m=+159.357581851" watchObservedRunningTime="2026-01-26 00:11:54.636807305 +0000 UTC m=+159.554401661" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.697967 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-g92zh" podStartSLOduration=133.697946473 podStartE2EDuration="2m13.697946473s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.695231677 +0000 UTC m=+159.612826023" watchObservedRunningTime="2026-01-26 00:11:54.697946473 +0000 UTC m=+159.615540819" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.698733 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-9vz6c" podStartSLOduration=133.698723045 podStartE2EDuration="2m13.698723045s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.64267821 +0000 UTC m=+159.560272576" watchObservedRunningTime="2026-01-26 00:11:54.698723045 +0000 UTC m=+159.616317391" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.708266 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:54 crc kubenswrapper[5107]: E0126 00:11:54.711943 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.211917046 +0000 UTC m=+160.129511392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.762093 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:54 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:11:54 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:11:54 crc kubenswrapper[5107]: healthz check failed Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.762181 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.786170 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6st9d" podStartSLOduration=134.786149432 podStartE2EDuration="2m14.786149432s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.784670991 +0000 UTC m=+159.702265347" watchObservedRunningTime="2026-01-26 00:11:54.786149432 +0000 UTC m=+159.703743778" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.813365 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:54 crc kubenswrapper[5107]: E0126 00:11:54.814014 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.313991445 +0000 UTC m=+160.231585791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.890818 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" podStartSLOduration=134.890792993 podStartE2EDuration="2m14.890792993s" podCreationTimestamp="2026-01-26 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.888274372 +0000 UTC m=+159.805868738" watchObservedRunningTime="2026-01-26 00:11:54.890792993 +0000 UTC m=+159.808387359" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.891524 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-97496" podStartSLOduration=133.891515553 podStartE2EDuration="2m13.891515553s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.826025253 +0000 UTC m=+159.743619609" watchObservedRunningTime="2026-01-26 00:11:54.891515553 +0000 UTC m=+159.809109909" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.916093 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:54 crc kubenswrapper[5107]: E0126 00:11:54.916677 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.416661139 +0000 UTC m=+160.334255495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.942499 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-7fc7h" podStartSLOduration=133.942477135 podStartE2EDuration="2m13.942477135s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.939749678 +0000 UTC m=+159.857344034" watchObservedRunningTime="2026-01-26 00:11:54.942477135 +0000 UTC m=+159.860071481" Jan 26 00:11:54 crc kubenswrapper[5107]: I0126 00:11:54.971211 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-46x2w" podStartSLOduration=14.971191672 podStartE2EDuration="14.971191672s" podCreationTimestamp="2026-01-26 00:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.968956249 +0000 UTC m=+159.886550625" watchObservedRunningTime="2026-01-26 00:11:54.971191672 +0000 UTC m=+159.888786028" Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.022083 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.022299 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.522253727 +0000 UTC m=+160.439848073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.023013 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.023320 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-ffjjk" podStartSLOduration=134.023292056 podStartE2EDuration="2m14.023292056s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:55.02201492 +0000 UTC m=+159.939609266" watchObservedRunningTime="2026-01-26 00:11:55.023292056 +0000 UTC m=+159.940886412" Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.023388 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.523378728 +0000 UTC m=+160.440973084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.149933 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.150123 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.650090889 +0000 UTC m=+160.567685235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.150745 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.151170 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.651151699 +0000 UTC m=+160.568746105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.170414 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" podStartSLOduration=134.1703979 podStartE2EDuration="2m14.1703979s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:55.169933937 +0000 UTC m=+160.087528283" watchObservedRunningTime="2026-01-26 00:11:55.1703979 +0000 UTC m=+160.087992246" Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.238948 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" podStartSLOduration=134.238926375 podStartE2EDuration="2m14.238926375s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:55.238340489 +0000 UTC m=+160.155934845" watchObservedRunningTime="2026-01-26 00:11:55.238926375 +0000 UTC m=+160.156520731" Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.252249 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.252468 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.752444855 +0000 UTC m=+160.670039201 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.252635 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.253120 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.753102604 +0000 UTC m=+160.670696950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.355996 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.356559 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.85653294 +0000 UTC m=+160.774127286 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.457673 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.458104 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.958084034 +0000 UTC m=+160.875678460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.559797 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.560110 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.06006511 +0000 UTC m=+160.977659456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.560235 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.561437 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.061425018 +0000 UTC m=+160.979019364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.580161 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" event={"ID":"b18dee05-6423-4857-95c5-63d2a976e19f","Type":"ContainerStarted","Data":"07f20a5f62ff00f4ff1c3dcbdc49cf02d62edd408d60b5dbe1db4b78ccc615d3"} Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.584205 5107 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-wk2r6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.584270 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" podUID="27c8cad2-b082-4ca6-b198-1d9817a2e90e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.662673 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.663047 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:56.162985572 +0000 UTC m=+161.080579928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.663749 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.664212 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.164193485 +0000 UTC m=+161.081788021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.745176 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.811361 5107 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-zmswq container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.811399 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.811455 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" podUID="cecc62a2-1a5f-4b0f-95bf-459d1493d1df" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.813198 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:56.313175482 +0000 UTC m=+161.230769848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.814716 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.815174 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.315149588 +0000 UTC m=+161.232744144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.826757 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:55 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:11:55 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:11:55 crc kubenswrapper[5107]: healthz check failed Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.826937 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.835908 5107 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-wk2r6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.835999 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" podUID="27c8cad2-b082-4ca6-b198-1d9817a2e90e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.916870 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.917046 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.41700939 +0000 UTC m=+161.334603746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:55 crc kubenswrapper[5107]: I0126 00:11:55.917494 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:55 crc kubenswrapper[5107]: E0126 00:11:55.917824 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.417814673 +0000 UTC m=+161.335409019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.019210 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:56 crc kubenswrapper[5107]: E0126 00:11:56.019327 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.519303995 +0000 UTC m=+161.436898341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.019822 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:56 crc kubenswrapper[5107]: E0126 00:11:56.020160 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.520148328 +0000 UTC m=+161.437742674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.121117 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:56 crc kubenswrapper[5107]: E0126 00:11:56.121541 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.621522447 +0000 UTC m=+161.539116793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.223036 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:56 crc kubenswrapper[5107]: E0126 00:11:56.223513 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.723497252 +0000 UTC m=+161.641091598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.263406 5107 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-zmswq container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.263506 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" podUID="cecc62a2-1a5f-4b0f-95bf-459d1493d1df" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.324324 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:56 crc kubenswrapper[5107]: E0126 00:11:56.324589 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.824553792 +0000 UTC m=+161.742148148 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.324863 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:56 crc kubenswrapper[5107]: E0126 00:11:56.325177 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.825159139 +0000 UTC m=+161.742753485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.437920 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:56 crc kubenswrapper[5107]: E0126 00:11:56.438535 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.938514185 +0000 UTC m=+161.856108531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.540226 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:56 crc kubenswrapper[5107]: E0126 00:11:56.540609 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.040593302 +0000 UTC m=+161.958187648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.587419 5107 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-wk2r6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.587505 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" podUID="27c8cad2-b082-4ca6-b198-1d9817a2e90e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.641874 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:56 crc kubenswrapper[5107]: E0126 00:11:56.642930 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.142907837 +0000 UTC m=+162.060502183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.695658 5107 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-jxbv4 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": context deadline exceeded" start-of-body= Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.695956 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" podUID="71a4bfc8-fadc-4b4e-90c6-7e93ee88dbe7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": context deadline exceeded" Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.740805 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:56 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:11:56 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:11:56 crc kubenswrapper[5107]: healthz check failed Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.740911 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.744253 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:56 crc kubenswrapper[5107]: E0126 00:11:56.744769 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.244748689 +0000 UTC m=+162.162343095 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:56 crc kubenswrapper[5107]: I0126 00:11:56.845608 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:56 crc kubenswrapper[5107]: E0126 00:11:56.845852 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.345815469 +0000 UTC m=+162.263409825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.051105 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.051653 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.551636133 +0000 UTC m=+162.469230479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.155358 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.155533 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.655503332 +0000 UTC m=+162.573097678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.155619 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.156001 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.655987905 +0000 UTC m=+162.573582251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.262856 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.263607 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.763588909 +0000 UTC m=+162.681183255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.365570 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.365945 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.865928115 +0000 UTC m=+162.783522461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.467245 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.467487 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.967453657 +0000 UTC m=+162.885048003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.467980 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.468334 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.968318412 +0000 UTC m=+162.885912788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.569717 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.570057 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.070006549 +0000 UTC m=+162.987600895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.570458 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.571030 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.071003357 +0000 UTC m=+162.988597893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.672001 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.672837 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.172817698 +0000 UTC m=+163.090412044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.711528 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" event={"ID":"c53ce89a-3e31-41ac-96d2-c4326f044986","Type":"ContainerStarted","Data":"7e410be378f48ef0727c683e09218c78d4490c1806907a4da877b6bc8c0a07ed"} Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.811052 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.811590 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.311568827 +0000 UTC m=+163.229163173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.818611 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:57 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:11:57 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:11:57 crc kubenswrapper[5107]: healthz check failed Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.818760 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:57 crc kubenswrapper[5107]: I0126 00:11:57.916299 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:57 crc kubenswrapper[5107]: E0126 00:11:57.916871 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.416846796 +0000 UTC m=+163.334441132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.018817 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.019560 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.519536981 +0000 UTC m=+163.437131567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.120870 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.121080 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.621038313 +0000 UTC m=+163.538632649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.121697 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.122276 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.622253708 +0000 UTC m=+163.539848054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.224438 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.224656 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.724618004 +0000 UTC m=+163.642212350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.225028 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.225674 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.725665414 +0000 UTC m=+163.643259750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.326930 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.327355 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.827334911 +0000 UTC m=+163.744929257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.430551 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.431249 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.93123171 +0000 UTC m=+163.848826056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.450538 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.532120 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.532336 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.03229495 +0000 UTC m=+163.949889306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.533259 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.533707 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.033689199 +0000 UTC m=+163.951283545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.632144 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.632357 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.681497 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.681847 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.697130 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.697353 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.197313587 +0000 UTC m=+164.114907933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.697449 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.697898 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.197861832 +0000 UTC m=+164.115456198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.802659 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.803005 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:58 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:11:58 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:11:58 crc kubenswrapper[5107]: healthz check failed Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.803222 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.803219 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.803061 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12644171-7711-41d1-9376-76515176916c-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"12644171-7711-41d1-9376-76515176916c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.803162 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.303141981 +0000 UTC m=+164.220736337 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.803926 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.804037 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12644171-7711-41d1-9376-76515176916c-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"12644171-7711-41d1-9376-76515176916c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.804434 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:11:58 crc kubenswrapper[5107]: E0126 00:11:58.804536 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.30452131 +0000 UTC m=+164.222115656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.917009 5107 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-flbvs container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.13:8443/livez\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 00:11:58 crc kubenswrapper[5107]: I0126 00:11:58.917079 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" podUID="b18dee05-6423-4857-95c5-63d2a976e19f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.13:8443/livez\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.019566 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.019727 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12644171-7711-41d1-9376-76515176916c-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"12644171-7711-41d1-9376-76515176916c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.020005 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12644171-7711-41d1-9376-76515176916c-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"12644171-7711-41d1-9376-76515176916c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.020414 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.520382486 +0000 UTC m=+164.437976852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.020486 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12644171-7711-41d1-9376-76515176916c-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"12644171-7711-41d1-9376-76515176916c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.106072 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12644171-7711-41d1-9376-76515176916c-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"12644171-7711-41d1-9376-76515176916c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.123869 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.124365 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.624349747 +0000 UTC m=+164.541944093 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.135240 5107 ???:1] "http: TLS handshake error from 192.168.126.11:59004: no serving certificate available for the kubelet" Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.224707 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.224825 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.72480004 +0000 UTC m=+164.642394386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.225035 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.225416 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.725397417 +0000 UTC m=+164.642991763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.301750 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-zmswq" Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.310328 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.319767 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.326570 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.326730 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.826707413 +0000 UTC m=+164.744301759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.326907 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.327319 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.82730819 +0000 UTC m=+164.744902536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.472524 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.472849 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.972829199 +0000 UTC m=+164.890423545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.574013 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.574369 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.074354743 +0000 UTC m=+164.991949089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.675249 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.676265 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.176237096 +0000 UTC m=+165.093831452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.741686 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:11:59 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:11:59 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:11:59 crc kubenswrapper[5107]: healthz check failed Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.741796 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:59 crc kubenswrapper[5107]: W0126 00:11:59.758822 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod12644171_7711_41d1_9376_76515176916c.slice/crio-3e5c2393623883e42dd6075d45431ac8be057fdc09415ea96756da6f907d7e81 WatchSource:0}: Error finding container 3e5c2393623883e42dd6075d45431ac8be057fdc09415ea96756da6f907d7e81: Status 404 returned error can't find the container with id 3e5c2393623883e42dd6075d45431ac8be057fdc09415ea96756da6f907d7e81 Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.774108 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.774222 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.778918 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.779404 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.279379604 +0000 UTC m=+165.196973950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.879987 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.880564 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.380531526 +0000 UTC m=+165.298125912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5107]: I0126 00:11:59.981461 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:11:59 crc kubenswrapper[5107]: E0126 00:11:59.982117 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.482102701 +0000 UTC m=+165.399697047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.082641 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.083058 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.583036597 +0000 UTC m=+165.500630943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.184334 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.184804 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.684783155 +0000 UTC m=+165.602377551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.286904 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.287613 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.787588034 +0000 UTC m=+165.705182390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.389427 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.390174 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.890141446 +0000 UTC m=+165.807735792 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.490973 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.491119 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.991087133 +0000 UTC m=+165.908681479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.491829 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.492411 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.992384879 +0000 UTC m=+165.909979225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.593583 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.593808 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.093767878 +0000 UTC m=+166.011362234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.594422 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.594858 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.094838258 +0000 UTC m=+166.012432604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.696211 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.696405 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.196367681 +0000 UTC m=+166.113962027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.696735 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.697155 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.197139333 +0000 UTC m=+166.114733679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.741984 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:00 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:12:00 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:12:00 crc kubenswrapper[5107]: healthz check failed Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.742154 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.783502 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.787315 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.788595 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.794109 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.794437 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.794523 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bh5dd"] Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.797678 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.798068 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56c0b45c-2648-462a-90aa-ebee1bb3358e-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"56c0b45c-2648-462a-90aa-ebee1bb3358e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.798337 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56c0b45c-2648-462a-90aa-ebee1bb3358e-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"56c0b45c-2648-462a-90aa-ebee1bb3358e\") " 
pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.798585 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.298562312 +0000 UTC m=+166.216156658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.883332 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"12644171-7711-41d1-9376-76515176916c","Type":"ContainerStarted","Data":"3e5c2393623883e42dd6075d45431ac8be057fdc09415ea96756da6f907d7e81"} Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.883639 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.883935 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bh5dd"] Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.884211 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gbddn"] Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.887464 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.896337 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbddn"] Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.896387 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zd5l8"] Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.896608 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.901670 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqbh2\" (UniqueName: \"kubernetes.io/projected/d71d7360-3eef-4260-b288-7fc9f8d6fecc-kube-api-access-xqbh2\") pod \"community-operators-bh5dd\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.901802 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-catalog-content\") pod \"community-operators-bh5dd\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.901847 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56c0b45c-2648-462a-90aa-ebee1bb3358e-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"56c0b45c-2648-462a-90aa-ebee1bb3358e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.902919 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.903400 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-utilities\") pod \"community-operators-bh5dd\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:12:00 crc kubenswrapper[5107]: E0126 00:12:00.903539 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.403524382 +0000 UTC m=+166.321118718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.903972 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56c0b45c-2648-462a-90aa-ebee1bb3358e-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"56c0b45c-2648-462a-90aa-ebee1bb3358e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.904092 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56c0b45c-2648-462a-90aa-ebee1bb3358e-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"56c0b45c-2648-462a-90aa-ebee1bb3358e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.904301 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.931954 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zd5l8"] Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.932037 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bfr4w"] Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.932647 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.943926 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56c0b45c-2648-462a-90aa-ebee1bb3358e-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"56c0b45c-2648-462a-90aa-ebee1bb3358e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.954070 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bfr4w"] Jan 26 00:12:00 crc kubenswrapper[5107]: I0126 00:12:00.954419 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.010362 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.010785 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.510750125 +0000 UTC m=+166.428344471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.012390 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-catalog-content\") pod \"certified-operators-gbddn\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.012472 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xqbh2\" (UniqueName: \"kubernetes.io/projected/d71d7360-3eef-4260-b288-7fc9f8d6fecc-kube-api-access-xqbh2\") pod \"community-operators-bh5dd\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.012543 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8ltq\" (UniqueName: \"kubernetes.io/projected/1d8cc2bf-c61e-4f0a-9bee-068919e02489-kube-api-access-f8ltq\") pod \"certified-operators-bfr4w\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.012671 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-utilities\") pod \"community-operators-zd5l8\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.012753 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-catalog-content\") pod \"community-operators-bh5dd\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.012802 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.012821 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2kg6\" (UniqueName: \"kubernetes.io/projected/c0c7bec4-aeda-4946-9599-726d61c41d93-kube-api-access-c2kg6\") pod \"certified-operators-gbddn\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.012858 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-utilities\") pod \"community-operators-bh5dd\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.012929 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-catalog-content\") pod \"certified-operators-bfr4w\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.013008 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-utilities\") pod \"certified-operators-gbddn\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.013071 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-utilities\") pod \"certified-operators-bfr4w\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.013110 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47jzt\" (UniqueName: \"kubernetes.io/projected/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-kube-api-access-47jzt\") pod \"community-operators-zd5l8\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.013181 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-catalog-content\") pod \"community-operators-zd5l8\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.014072 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-catalog-content\") pod \"community-operators-bh5dd\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.014294 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.514275024 +0000 UTC m=+166.431869370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.014968 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-utilities\") pod \"community-operators-bh5dd\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.049300 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqbh2\" (UniqueName: \"kubernetes.io/projected/d71d7360-3eef-4260-b288-7fc9f8d6fecc-kube-api-access-xqbh2\") pod \"community-operators-bh5dd\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.130622 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.146906 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.147657 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-utilities\") pod \"certified-operators-bfr4w\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.147915 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-47jzt\" (UniqueName: \"kubernetes.io/projected/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-kube-api-access-47jzt\") pod \"community-operators-zd5l8\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.148242 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-catalog-content\") pod \"community-operators-zd5l8\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.151956 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-catalog-content\") pod \"certified-operators-gbddn\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.152450 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f8ltq\" (UniqueName: \"kubernetes.io/projected/1d8cc2bf-c61e-4f0a-9bee-068919e02489-kube-api-access-f8ltq\") pod \"certified-operators-bfr4w\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.152686 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-utilities\") pod \"community-operators-zd5l8\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.155627 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c2kg6\" (UniqueName: \"kubernetes.io/projected/c0c7bec4-aeda-4946-9599-726d61c41d93-kube-api-access-c2kg6\") pod \"certified-operators-gbddn\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.155814 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-catalog-content\") pod \"certified-operators-bfr4w\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.156676 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-utilities\") pod \"certified-operators-gbddn\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.157848 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-utilities\") pod \"certified-operators-gbddn\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.158094 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.658064445 +0000 UTC m=+166.575658791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.158583 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-utilities\") pod \"certified-operators-bfr4w\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.159507 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-utilities\") pod \"community-operators-zd5l8\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.161327 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-catalog-content\") pod \"certified-operators-bfr4w\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.162157 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-catalog-content\") pod \"community-operators-zd5l8\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.162686 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-catalog-content\") pod \"certified-operators-gbddn\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.211235 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2kg6\" (UniqueName: \"kubernetes.io/projected/c0c7bec4-aeda-4946-9599-726d61c41d93-kube-api-access-c2kg6\") pod \"certified-operators-gbddn\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.214034 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-47jzt\" (UniqueName: \"kubernetes.io/projected/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-kube-api-access-47jzt\") pod \"community-operators-zd5l8\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.215398 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.226160 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8ltq\" (UniqueName: \"kubernetes.io/projected/1d8cc2bf-c61e-4f0a-9bee-068919e02489-kube-api-access-f8ltq\") pod \"certified-operators-bfr4w\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.232115 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.259425 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.260661 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.760612646 +0000 UTC m=+166.678207172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.284109 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.305855 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.314527 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.332024 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.334823 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.335091 5107 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" podUID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.367158 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.369051 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.868959161 +0000 UTC m=+166.786553507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.470212 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.470765 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.970746881 +0000 UTC m=+166.888341227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.577605 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.578155 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.078114618 +0000 UTC m=+166.995708964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.579684 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.079662432 +0000 UTC m=+166.997256778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.580694 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.586463 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-46x2w" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.682424 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.683231 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.18317399 +0000 UTC m=+167.100768346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.760085 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:01 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:12:01 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:12:01 crc kubenswrapper[5107]: healthz check failed Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.760214 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.785503 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.786150 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.286132074 +0000 UTC m=+167.203726420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.874682 5107 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.891466 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.891960 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-26 00:12:02.391935157 +0000 UTC m=+167.309529503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.897351 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" event={"ID":"c53ce89a-3e31-41ac-96d2-c4326f044986","Type":"ContainerStarted","Data":"81c16e9eebf0099f6925301b05e2ad2eb30fe3fb3133b9e24cb9ccaeee972c22"} Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.904266 5107 generic.go:358] "Generic (PLEG): container finished" podID="8e75356d-8170-4619-9539-ea5e50c2b892" containerID="8a5b1377344c442bbe489c80f3b7f0a7c74cb6841bc2b430c0cc917a4dc94ad7" exitCode=0 Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.904498 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" event={"ID":"8e75356d-8170-4619-9539-ea5e50c2b892","Type":"ContainerDied","Data":"8a5b1377344c442bbe489c80f3b7f0a7c74cb6841bc2b430c0cc917a4dc94ad7"} Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.961503 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 26 00:12:01 crc kubenswrapper[5107]: W0126 00:12:01.980682 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod56c0b45c_2648_462a_90aa_ebee1bb3358e.slice/crio-ce5f75a70249639abd50cfb8ac696a8410a19c3b041a1920a8df88e8b43d6cda WatchSource:0}: Error finding container ce5f75a70249639abd50cfb8ac696a8410a19c3b041a1920a8df88e8b43d6cda: Status 404 returned error can't find the container with id ce5f75a70249639abd50cfb8ac696a8410a19c3b041a1920a8df88e8b43d6cda Jan 26 00:12:01 crc kubenswrapper[5107]: I0126 00:12:01.997092 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:01 crc kubenswrapper[5107]: E0126 00:12:01.997765 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.49773354 +0000 UTC m=+167.415328066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.073866 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-mjn4v" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.075675 5107 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T00:12:01.875119074Z","UUID":"4ef999d2-250d-4ee0-8dce-cdd3d95ea98f","Handler":null,"Name":"","Endpoint":""} Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.080729 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.097836 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:02 crc kubenswrapper[5107]: E0126 00:12:02.099116 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.599072448 +0000 UTC m=+167.516666794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.099996 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:02 crc kubenswrapper[5107]: E0126 00:12:02.100558 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.600540439 +0000 UTC m=+167.518134785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-5hcgj" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.123382 5107 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.123451 5107 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.148416 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j26gs"] Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.193473 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j26gs"] Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.194119 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.202653 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.205375 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.213938 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.228997 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bfr4w"] Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.309019 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-utilities\") pod \"redhat-marketplace-j26gs\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.309082 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrxjf\" (UniqueName: \"kubernetes.io/projected/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-kube-api-access-zrxjf\") pod \"redhat-marketplace-j26gs\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.309231 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.309397 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-catalog-content\") pod \"redhat-marketplace-j26gs\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.322727 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bh5dd"] Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.352523 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbddn"] Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.365608 5107 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.365683 5107 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.393687 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-5hcgj\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.438547 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-catalog-content\") pod \"redhat-marketplace-j26gs\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.438804 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-utilities\") pod \"redhat-marketplace-j26gs\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.438864 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zrxjf\" (UniqueName: \"kubernetes.io/projected/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-kube-api-access-zrxjf\") pod \"redhat-marketplace-j26gs\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.440809 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-catalog-content\") pod \"redhat-marketplace-j26gs\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.441383 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-utilities\") pod \"redhat-marketplace-j26gs\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.480312 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrxjf\" (UniqueName: \"kubernetes.io/projected/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-kube-api-access-zrxjf\") pod \"redhat-marketplace-j26gs\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.521851 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-zd5l8"] Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.522615 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vk79k"] Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.541137 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.554175 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.802665 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:02 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:12:02 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:12:02 crc kubenswrapper[5107]: healthz check failed Jan 26 00:12:02 crc kubenswrapper[5107]: I0126 00:12:02.803342 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:03 crc kubenswrapper[5107]: W0126 00:12:03.183987 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae7da3db_5cbd_40ff_adfb_417c0d055042.slice/crio-77d998845b4c71fcb32490bc1bffdda6ea959390a9b72e8e0e61cbd21aef863e WatchSource:0}: Error finding container 77d998845b4c71fcb32490bc1bffdda6ea959390a9b72e8e0e61cbd21aef863e: Status 404 returned error can't find the container with id 77d998845b4c71fcb32490bc1bffdda6ea959390a9b72e8e0e61cbd21aef863e Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.297840 5107 patch_prober.go:28] interesting pod/console-64d44f6ddf-4gmk9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.297982 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-4gmk9" podUID="1a7fcb0f-fb1d-41e2-b417-20b92ded1b6f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.314841 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jxbv4" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.314943 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh5dd" event={"ID":"d71d7360-3eef-4260-b288-7fc9f8d6fecc","Type":"ContainerStarted","Data":"32fb159b08fc542583ffac618284c6e4f511d17cb0b9fde87e51d9a9e18968bb"} Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.314989 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk79k"] Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.315364 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.315454 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.315498 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"56c0b45c-2648-462a-90aa-ebee1bb3358e","Type":"ContainerStarted","Data":"ce5f75a70249639abd50cfb8ac696a8410a19c3b041a1920a8df88e8b43d6cda"} Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.315546 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-5hcgj"] Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.315581 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zd5l8" event={"ID":"638ad5ba-8cd0-49f3-817d-eb8c75ecc863","Type":"ContainerStarted","Data":"73ca6e245f7a9d66718f51480359fd46212820a9f4f0f2cb0dd9a22f6303951c"} Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.315610 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"12644171-7711-41d1-9376-76515176916c","Type":"ContainerStarted","Data":"66316c458903fab5e2490d0c348abd8d4a49bd0a4cdef38bae735c7d2b27a0f2"} Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.315634 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbddn" event={"ID":"c0c7bec4-aeda-4946-9599-726d61c41d93","Type":"ContainerStarted","Data":"5d28069fcf50443e0e1eef8f7bcfd4529979305729d9ab516aa31881b651aa56"} Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.315662 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j26gs"] Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.315689 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" event={"ID":"c53ce89a-3e31-41ac-96d2-c4326f044986","Type":"ContainerStarted","Data":"58173be59a5a1c7632deedca2d321bf02c61f7c06e7938aaf2666d0e41632774"} Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.315714 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfr4w" event={"ID":"1d8cc2bf-c61e-4f0a-9bee-068919e02489","Type":"ContainerStarted","Data":"5e960997a7e73fcdc0e599e98291f1239ecd90d5f9e187ebf70745e839fe22cb"} Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.317595 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2chhv"] Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.410546 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-utilities\") pod \"redhat-marketplace-vk79k\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.410633 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-catalog-content\") pod \"redhat-marketplace-vk79k\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " 
pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.410676 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbczg\" (UniqueName: \"kubernetes.io/projected/8eedae47-54cd-438f-93d5-73b21a1fb540-kube-api-access-dbczg\") pod \"redhat-marketplace-vk79k\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.514068 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-utilities\") pod \"redhat-marketplace-vk79k\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.514156 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-utilities\") pod \"redhat-marketplace-vk79k\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.514262 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-catalog-content\") pod \"redhat-marketplace-vk79k\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.514638 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-catalog-content\") pod \"redhat-marketplace-vk79k\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.514721 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dbczg\" (UniqueName: \"kubernetes.io/projected/8eedae47-54cd-438f-93d5-73b21a1fb540-kube-api-access-dbczg\") pod \"redhat-marketplace-vk79k\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.643417 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbczg\" (UniqueName: \"kubernetes.io/projected/8eedae47-54cd-438f-93d5-73b21a1fb540-kube-api-access-dbczg\") pod \"redhat-marketplace-vk79k\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.675394 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.740533 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:03 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:12:03 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:12:03 crc kubenswrapper[5107]: healthz check failed Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.740642 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.747044 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.747123 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.832111 5107 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-flbvs container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 00:12:03 crc kubenswrapper[5107]: [+]log ok Jan 26 00:12:03 crc kubenswrapper[5107]: [+]etcd ok Jan 26 00:12:03 crc kubenswrapper[5107]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 00:12:03 crc kubenswrapper[5107]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 00:12:03 crc kubenswrapper[5107]: [+]poststarthook/max-in-flight-filter ok Jan 26 00:12:03 crc kubenswrapper[5107]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 00:12:03 crc kubenswrapper[5107]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 00:12:03 crc kubenswrapper[5107]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 00:12:03 crc kubenswrapper[5107]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 00:12:03 crc kubenswrapper[5107]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 00:12:03 crc kubenswrapper[5107]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 00:12:03 crc kubenswrapper[5107]: [+]poststarthook/openshift.io-startinformers ok Jan 26 00:12:03 crc kubenswrapper[5107]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 00:12:03 crc kubenswrapper[5107]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 00:12:03 crc kubenswrapper[5107]: livez check failed Jan 26 00:12:03 crc kubenswrapper[5107]: I0126 00:12:03.832223 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" podUID="b18dee05-6423-4857-95c5-63d2a976e19f" containerName="openshift-apiserver" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.124244 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.140637 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e75356d-8170-4619-9539-ea5e50c2b892-config-volume\") pod \"8e75356d-8170-4619-9539-ea5e50c2b892\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.140772 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqg9b\" (UniqueName: \"kubernetes.io/projected/8e75356d-8170-4619-9539-ea5e50c2b892-kube-api-access-hqg9b\") pod \"8e75356d-8170-4619-9539-ea5e50c2b892\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.141752 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e75356d-8170-4619-9539-ea5e50c2b892-config-volume" (OuterVolumeSpecName: "config-volume") pod "8e75356d-8170-4619-9539-ea5e50c2b892" (UID: "8e75356d-8170-4619-9539-ea5e50c2b892"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.141842 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e75356d-8170-4619-9539-ea5e50c2b892-secret-volume\") pod \"8e75356d-8170-4619-9539-ea5e50c2b892\" (UID: \"8e75356d-8170-4619-9539-ea5e50c2b892\") " Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.142634 5107 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e75356d-8170-4619-9539-ea5e50c2b892-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.161295 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=6.161274736 podStartE2EDuration="6.161274736s" podCreationTimestamp="2026-01-26 00:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:03.898050689 +0000 UTC m=+168.815645045" watchObservedRunningTime="2026-01-26 00:12:04.161274736 +0000 UTC m=+169.078869082" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.193763 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e75356d-8170-4619-9539-ea5e50c2b892-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8e75356d-8170-4619-9539-ea5e50c2b892" (UID: "8e75356d-8170-4619-9539-ea5e50c2b892"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.195637 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e75356d-8170-4619-9539-ea5e50c2b892-kube-api-access-hqg9b" (OuterVolumeSpecName: "kube-api-access-hqg9b") pod "8e75356d-8170-4619-9539-ea5e50c2b892" (UID: "8e75356d-8170-4619-9539-ea5e50c2b892"). InnerVolumeSpecName "kube-api-access-hqg9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.236781 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.240370 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.244602 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-catalog-content\") pod \"redhat-operators-2chhv\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.244651 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-utilities\") pod \"redhat-operators-2chhv\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.245131 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6c5t\" (UniqueName: \"kubernetes.io/projected/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-kube-api-access-k6c5t\") pod \"redhat-operators-2chhv\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.245440 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hqg9b\" (UniqueName: \"kubernetes.io/projected/8e75356d-8170-4619-9539-ea5e50c2b892-kube-api-access-hqg9b\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.245459 5107 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8e75356d-8170-4619-9539-ea5e50c2b892-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.333706 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.338386 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2chhv"] Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.338510 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j26gs" event={"ID":"b2f8e393-1ed3-4475-bd0b-e0af8867a07a","Type":"ContainerStarted","Data":"3e5b1d06f0d9e62ef989468e5e4267088a933dc68084f463d493a086d3bcce78"} Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.338545 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" event={"ID":"ae7da3db-5cbd-40ff-adfb-417c0d055042","Type":"ContainerStarted","Data":"77d998845b4c71fcb32490bc1bffdda6ea959390a9b72e8e0e61cbd21aef863e"} Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.338573 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lrc58"] Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.339977 5107 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e75356d-8170-4619-9539-ea5e50c2b892" containerName="collect-profiles" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.340015 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e75356d-8170-4619-9539-ea5e50c2b892" containerName="collect-profiles" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.340277 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="8e75356d-8170-4619-9539-ea5e50c2b892" containerName="collect-profiles" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.346657 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k6c5t\" (UniqueName: \"kubernetes.io/projected/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-kube-api-access-k6c5t\") pod \"redhat-operators-2chhv\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.348197 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-catalog-content\") pod \"redhat-operators-2chhv\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.348288 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-utilities\") pod \"redhat-operators-2chhv\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.350058 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-utilities\") pod \"redhat-operators-2chhv\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.350195 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-catalog-content\") pod \"redhat-operators-2chhv\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.460212 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrc58"] Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.461772 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6c5t\" (UniqueName: \"kubernetes.io/projected/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-kube-api-access-k6c5t\") pod \"redhat-operators-2chhv\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.461984 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.496348 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk79k"] Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.583070 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-catalog-content\") pod \"redhat-operators-lrc58\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.583187 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqnzf\" (UniqueName: \"kubernetes.io/projected/b89f5a05-23c2-41e1-98b3-22ba5035191f-kube-api-access-gqnzf\") pod \"redhat-operators-lrc58\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.583335 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-utilities\") pod \"redhat-operators-lrc58\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.631111 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:12:04 crc kubenswrapper[5107]: W0126 00:12:04.671296 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eedae47_54cd_438f_93d5_73b21a1fb540.slice/crio-d72e9cb11069ac532eb5a40f0cd485fd60063c4b07e8630ec5717a9b4d48f3c0 WatchSource:0}: Error finding container d72e9cb11069ac532eb5a40f0cd485fd60063c4b07e8630ec5717a9b4d48f3c0: Status 404 returned error can't find the container with id d72e9cb11069ac532eb5a40f0cd485fd60063c4b07e8630ec5717a9b4d48f3c0 Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.685980 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-utilities\") pod \"redhat-operators-lrc58\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.686389 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-catalog-content\") pod \"redhat-operators-lrc58\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.686427 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gqnzf\" (UniqueName: \"kubernetes.io/projected/b89f5a05-23c2-41e1-98b3-22ba5035191f-kube-api-access-gqnzf\") pod \"redhat-operators-lrc58\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.686942 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-utilities\") pod \"redhat-operators-lrc58\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.687643 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-catalog-content\") pod \"redhat-operators-lrc58\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.725301 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqnzf\" (UniqueName: \"kubernetes.io/projected/b89f5a05-23c2-41e1-98b3-22ba5035191f-kube-api-access-gqnzf\") pod \"redhat-operators-lrc58\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.855724 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:04 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:12:04 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:12:04 crc kubenswrapper[5107]: healthz check failed Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.855810 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.894083 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w"] Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.902248 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xqx9c"] Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.895074 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" podUID="1dcc8c3a-74e3-404d-8f0f-cec0001cf476" containerName="route-controller-manager" containerID="cri-o://c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109" gracePeriod=30 Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.967395 5107 generic.go:358] "Generic (PLEG): container finished" podID="12644171-7711-41d1-9376-76515176916c" containerID="66316c458903fab5e2490d0c348abd8d4a49bd0a4cdef38bae735c7d2b27a0f2" exitCode=0 Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.967478 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"12644171-7711-41d1-9376-76515176916c","Type":"ContainerDied","Data":"66316c458903fab5e2490d0c348abd8d4a49bd0a4cdef38bae735c7d2b27a0f2"} Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.970184 5107 generic.go:358] "Generic (PLEG): container finished" podID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerID="2b5cef2d39ddcb877f318305057949c6adad31a6831afd0c1ef7e32cb5908114" exitCode=0 Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.970309 5107 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfr4w" event={"ID":"1d8cc2bf-c61e-4f0a-9bee-068919e02489","Type":"ContainerDied","Data":"2b5cef2d39ddcb877f318305057949c6adad31a6831afd0c1ef7e32cb5908114"} Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.975541 5107 generic.go:358] "Generic (PLEG): container finished" podID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" containerID="f085612c488a1a673ed80660d81861a573e540f227bbdfdcccbd78f6623f2558" exitCode=0 Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.975758 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh5dd" event={"ID":"d71d7360-3eef-4260-b288-7fc9f8d6fecc","Type":"ContainerDied","Data":"f085612c488a1a673ed80660d81861a573e540f227bbdfdcccbd78f6623f2558"} Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.992732 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"56c0b45c-2648-462a-90aa-ebee1bb3358e","Type":"ContainerStarted","Data":"5f1c4c4b4b5fb2eb50873a02eb8f001a5581359650ad1129b5dd8e9bb92471d3"} Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.996558 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" event={"ID":"8e75356d-8170-4619-9539-ea5e50c2b892","Type":"ContainerDied","Data":"7adfe1f469e4ad9ecf95d86cb40bc94453e78257c4d6e5278be75771edf01805"} Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.996603 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7adfe1f469e4ad9ecf95d86cb40bc94453e78257c4d6e5278be75771edf01805" Jan 26 00:12:04 crc kubenswrapper[5107]: I0126 00:12:04.996718 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-g5ptf" Jan 26 00:12:05 crc kubenswrapper[5107]: I0126 00:12:05.001348 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk79k" event={"ID":"8eedae47-54cd-438f-93d5-73b21a1fb540","Type":"ContainerStarted","Data":"d72e9cb11069ac532eb5a40f0cd485fd60063c4b07e8630ec5717a9b4d48f3c0"} Jan 26 00:12:05 crc kubenswrapper[5107]: I0126 00:12:05.001747 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" podUID="21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" containerName="controller-manager" containerID="cri-o://f35ee95538134827987f7dee88a5ce9aa2a91555f1ffea857329a5fa6d6b8175" gracePeriod=30 Jan 26 00:12:05 crc kubenswrapper[5107]: I0126 00:12:05.054391 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=6.054366732 podStartE2EDuration="6.054366732s" podCreationTimestamp="2026-01-26 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:05.052681615 +0000 UTC m=+169.970275961" watchObservedRunningTime="2026-01-26 00:12:05.054366732 +0000 UTC m=+169.971961078" Jan 26 00:12:05 crc kubenswrapper[5107]: I0126 00:12:05.806391 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:05 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:12:05 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:12:05 crc kubenswrapper[5107]: healthz check failed Jan 26 00:12:05 crc kubenswrapper[5107]: I0126 00:12:05.806994 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:05 crc kubenswrapper[5107]: I0126 00:12:05.931388 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:12:05 crc kubenswrapper[5107]: I0126 00:12:05.935295 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:12:05 crc kubenswrapper[5107]: I0126 00:12:05.973328 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt"] Jan 26 00:12:05 crc kubenswrapper[5107]: I0126 00:12:05.974029 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1dcc8c3a-74e3-404d-8f0f-cec0001cf476" containerName="route-controller-manager" Jan 26 00:12:05 crc kubenswrapper[5107]: I0126 00:12:05.974044 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dcc8c3a-74e3-404d-8f0f-cec0001cf476" containerName="route-controller-manager" Jan 26 00:12:05 crc kubenswrapper[5107]: I0126 00:12:05.974152 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="1dcc8c3a-74e3-404d-8f0f-cec0001cf476" containerName="route-controller-manager" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.010969 5107 generic.go:358] "Generic (PLEG): container finished" podID="1dcc8c3a-74e3-404d-8f0f-cec0001cf476" containerID="c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109" exitCode=0 Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.013944 5107 generic.go:358] "Generic (PLEG): container finished" podID="21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" containerID="f35ee95538134827987f7dee88a5ce9aa2a91555f1ffea857329a5fa6d6b8175" exitCode=0 Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.051911 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.062425 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-tmp\") pod \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.062606 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvlxh\" (UniqueName: \"kubernetes.io/projected/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-kube-api-access-bvlxh\") pod \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.062679 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-config\") pod \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.062721 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-serving-cert\") pod \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.062758 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-client-ca\") pod \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\" (UID: \"1dcc8c3a-74e3-404d-8f0f-cec0001cf476\") " Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.064763 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-tmp" (OuterVolumeSpecName: "tmp") pod "1dcc8c3a-74e3-404d-8f0f-cec0001cf476" (UID: "1dcc8c3a-74e3-404d-8f0f-cec0001cf476"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.065091 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-client-ca" (OuterVolumeSpecName: "client-ca") pod "1dcc8c3a-74e3-404d-8f0f-cec0001cf476" (UID: "1dcc8c3a-74e3-404d-8f0f-cec0001cf476"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.067752 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-config" (OuterVolumeSpecName: "config") pod "1dcc8c3a-74e3-404d-8f0f-cec0001cf476" (UID: "1dcc8c3a-74e3-404d-8f0f-cec0001cf476"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.072656 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-kube-api-access-bvlxh" (OuterVolumeSpecName: "kube-api-access-bvlxh") pod "1dcc8c3a-74e3-404d-8f0f-cec0001cf476" (UID: "1dcc8c3a-74e3-404d-8f0f-cec0001cf476"). InnerVolumeSpecName "kube-api-access-bvlxh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.089203 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1dcc8c3a-74e3-404d-8f0f-cec0001cf476" (UID: "1dcc8c3a-74e3-404d-8f0f-cec0001cf476"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.165163 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bvlxh\" (UniqueName: \"kubernetes.io/projected/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-kube-api-access-bvlxh\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.165221 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.165233 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.165245 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.165263 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1dcc8c3a-74e3-404d-8f0f-cec0001cf476-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:06 crc kubenswrapper[5107]: W0126 00:12:06.570678 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb89f5a05_23c2_41e1_98b3_22ba5035191f.slice/crio-af992c4a8cb69cc4e1c03164c427346fef1cb300d461623ffc59763fe34d615f WatchSource:0}: Error finding container af992c4a8cb69cc4e1c03164c427346fef1cb300d461623ffc59763fe34d615f: Status 404 returned error can't find the container with id af992c4a8cb69cc4e1c03164c427346fef1cb300d461623ffc59763fe34d615f Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.731862 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wk2r6" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.731989 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" event={"ID":"1dcc8c3a-74e3-404d-8f0f-cec0001cf476","Type":"ContainerDied","Data":"c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109"} Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.732029 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" event={"ID":"1dcc8c3a-74e3-404d-8f0f-cec0001cf476","Type":"ContainerDied","Data":"656bbed1bfc0386c8b956b5989f88aadce22e0461cdd846f3ac9b434a35050cf"} Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.733247 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.734133 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.734994 5107 scope.go:117] "RemoveContainer" containerID="c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.742179 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:06 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:12:06 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:12:06 crc kubenswrapper[5107]: healthz check failed Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.742273 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.748309 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" event={"ID":"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f","Type":"ContainerDied","Data":"f35ee95538134827987f7dee88a5ce9aa2a91555f1ffea857329a5fa6d6b8175"} Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.748363 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt"] Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.748398 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2chhv"] Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.748420 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrc58"] Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.879719 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c24009-786a-4a05-8c86-b94337ce730e-serving-cert\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.880038 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-client-ca\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.880148 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wrs6\" (UniqueName: \"kubernetes.io/projected/07c24009-786a-4a05-8c86-b94337ce730e-kube-api-access-8wrs6\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.880210 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/07c24009-786a-4a05-8c86-b94337ce730e-tmp\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.880259 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-config\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.981398 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wrs6\" (UniqueName: \"kubernetes.io/projected/07c24009-786a-4a05-8c86-b94337ce730e-kube-api-access-8wrs6\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.981483 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07c24009-786a-4a05-8c86-b94337ce730e-tmp\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.981518 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-config\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.981657 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c24009-786a-4a05-8c86-b94337ce730e-serving-cert\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.981770 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-client-ca\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.982951 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-client-ca\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.983656 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07c24009-786a-4a05-8c86-b94337ce730e-tmp\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" 
(UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.984572 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-config\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:06 crc kubenswrapper[5107]: I0126 00:12:06.991823 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c24009-786a-4a05-8c86-b94337ce730e-serving-cert\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.009629 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wrs6\" (UniqueName: \"kubernetes.io/projected/07c24009-786a-4a05-8c86-b94337ce730e-kube-api-access-8wrs6\") pod \"route-controller-manager-5f5646cf8b-9kgnt\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.035574 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" event={"ID":"c53ce89a-3e31-41ac-96d2-c4326f044986","Type":"ContainerStarted","Data":"8ba6bc88709cba30b307c94278c1ebd314dede097e51d754daa175c7a21d60e9"} Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.049407 5107 generic.go:358] "Generic (PLEG): container finished" podID="56c0b45c-2648-462a-90aa-ebee1bb3358e" containerID="5f1c4c4b4b5fb2eb50873a02eb8f001a5581359650ad1129b5dd8e9bb92471d3" exitCode=0 Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.049591 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"56c0b45c-2648-462a-90aa-ebee1bb3358e","Type":"ContainerDied","Data":"5f1c4c4b4b5fb2eb50873a02eb8f001a5581359650ad1129b5dd8e9bb92471d3"} Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.052200 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2chhv" event={"ID":"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741","Type":"ContainerStarted","Data":"bc2d48e2bcd1d84bcbb6a63e1072958f4152c09cd788ca8223267d8dd8006290"} Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.057537 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk79k" event={"ID":"8eedae47-54cd-438f-93d5-73b21a1fb540","Type":"ContainerStarted","Data":"e45ce3e8de11079c80369688511d164b5b1f995f0e203ff2a4ac5615bb259b19"} Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.068020 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j26gs" event={"ID":"b2f8e393-1ed3-4475-bd0b-e0af8867a07a","Type":"ContainerStarted","Data":"382a98cd9a0790fc6419b0a60953c99d5126f685c796d293a38b6c8871715e39"} Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.075822 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" 
event={"ID":"ae7da3db-5cbd-40ff-adfb-417c0d055042","Type":"ContainerStarted","Data":"31c05d3c96b7c6ff6439ca5881a48f83b8d1dfb9fd7b74ab245de1b77dbbce8a"} Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.076951 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.088238 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zd5l8" event={"ID":"638ad5ba-8cd0-49f3-817d-eb8c75ecc863","Type":"ContainerStarted","Data":"c24fd2a96fbda832e078de2f17825928ccd2a840c6e8654990aeb7ce9549f1c6"} Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.104788 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7mhc8" podStartSLOduration=27.104764369 podStartE2EDuration="27.104764369s" podCreationTimestamp="2026-01-26 00:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:07.062264055 +0000 UTC m=+171.979858421" watchObservedRunningTime="2026-01-26 00:12:07.104764369 +0000 UTC m=+172.022358725" Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.110231 5107 scope.go:117] "RemoveContainer" containerID="c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109" Jan 26 00:12:07 crc kubenswrapper[5107]: E0126 00:12:07.111025 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109\": container with ID starting with c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109 not found: ID does not exist" containerID="c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109" Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.111056 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109"} err="failed to get container status \"c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109\": rpc error: code = NotFound desc = could not find container \"c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109\": container with ID starting with c93edab0e1b84ff6fe1bc7b000e105b7938a9e1b56a420096bde0c8c5f498109 not found: ID does not exist" Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.111699 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrc58" event={"ID":"b89f5a05-23c2-41e1-98b3-22ba5035191f","Type":"ContainerStarted","Data":"af992c4a8cb69cc4e1c03164c427346fef1cb300d461623ffc59763fe34d615f"} Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.113619 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbddn" event={"ID":"c0c7bec4-aeda-4946-9599-726d61c41d93","Type":"ContainerStarted","Data":"b1054e7d7f63c5344d78a93a8aac5d7206bfc6e3b4ecff76fe6c647e31d5adcb"} Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.326844 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" podStartSLOduration=146.326823668 podStartE2EDuration="2m26.326823668s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:07.32581345 +0000 UTC m=+172.243407816" watchObservedRunningTime="2026-01-26 00:12:07.326823668 +0000 UTC m=+172.244418014" Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.819180 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:07 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:12:07 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:12:07 crc kubenswrapper[5107]: healthz check failed Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.819576 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.967077 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:07 crc kubenswrapper[5107]: I0126 00:12:07.970182 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.005489 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b595d7598-z5cmg"] Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.006540 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" containerName="controller-manager" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.006559 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" containerName="controller-manager" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.006836 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" containerName="controller-manager" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.012442 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.020252 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-tmp\") pod \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.020443 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-serving-cert\") pod \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.020499 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-proxy-ca-bundles\") pod \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.020576 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12644171-7711-41d1-9376-76515176916c-kube-api-access\") pod \"12644171-7711-41d1-9376-76515176916c\" (UID: \"12644171-7711-41d1-9376-76515176916c\") " Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.020713 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12644171-7711-41d1-9376-76515176916c-kubelet-dir\") pod \"12644171-7711-41d1-9376-76515176916c\" (UID: \"12644171-7711-41d1-9376-76515176916c\") " Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.020747 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-config\") pod \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.020814 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-client-ca\") pod \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.020956 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lj8h\" (UniqueName: \"kubernetes.io/projected/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-kube-api-access-7lj8h\") pod \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\" (UID: \"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f\") " Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.021204 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12644171-7711-41d1-9376-76515176916c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "12644171-7711-41d1-9376-76515176916c" (UID: "12644171-7711-41d1-9376-76515176916c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.021721 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-tmp" (OuterVolumeSpecName: "tmp") pod "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" (UID: "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.022119 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" (UID: "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.022108 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-client-ca" (OuterVolumeSpecName: "client-ca") pod "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" (UID: "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.022697 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-config" (OuterVolumeSpecName: "config") pod "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" (UID: "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.031919 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.031963 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.031975 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12644171-7711-41d1-9376-76515176916c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.031986 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.031995 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.085960 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" (UID: "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.091519 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12644171-7711-41d1-9376-76515176916c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "12644171-7711-41d1-9376-76515176916c" (UID: "12644171-7711-41d1-9376-76515176916c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.097296 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-kube-api-access-7lj8h" (OuterVolumeSpecName: "kube-api-access-7lj8h") pod "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" (UID: "21ff8993-d52d-4dcc-a520-c1f46e8e1c6f"). InnerVolumeSpecName "kube-api-access-7lj8h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.123069 5107 generic.go:358] "Generic (PLEG): container finished" podID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerID="e45ce3e8de11079c80369688511d164b5b1f995f0e203ff2a4ac5615bb259b19" exitCode=0 Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.125939 5107 generic.go:358] "Generic (PLEG): container finished" podID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" containerID="382a98cd9a0790fc6419b0a60953c99d5126f685c796d293a38b6c8871715e39" exitCode=0 Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.129752 5107 generic.go:358] "Generic (PLEG): container finished" podID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerID="c24fd2a96fbda832e078de2f17825928ccd2a840c6e8654990aeb7ce9549f1c6" exitCode=0 Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.133874 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7lj8h\" (UniqueName: \"kubernetes.io/projected/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-kube-api-access-7lj8h\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.133937 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.133950 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12644171-7711-41d1-9376-76515176916c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.135841 5107 generic.go:358] "Generic (PLEG): container finished" podID="c0c7bec4-aeda-4946-9599-726d61c41d93" containerID="b1054e7d7f63c5344d78a93a8aac5d7206bfc6e3b4ecff76fe6c647e31d5adcb" exitCode=0 Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.777019 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:08 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:12:08 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:12:08 crc kubenswrapper[5107]: healthz check failed Jan 26 00:12:08 crc kubenswrapper[5107]: I0126 00:12:08.777298 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.170200 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.175755 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.176440 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.185474 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-client-ca\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.185563 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-proxy-ca-bundles\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.185679 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4hbq\" (UniqueName: \"kubernetes.io/projected/eea6e4fb-6302-4136-827b-04387e5f119f-kube-api-access-l4hbq\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.185781 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eea6e4fb-6302-4136-827b-04387e5f119f-tmp\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.185861 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eea6e4fb-6302-4136-827b-04387e5f119f-serving-cert\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.186000 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-config\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.218276 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk79k" 
event={"ID":"8eedae47-54cd-438f-93d5-73b21a1fb540","Type":"ContainerDied","Data":"e45ce3e8de11079c80369688511d164b5b1f995f0e203ff2a4ac5615bb259b19"} Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.218715 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.218826 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b595d7598-z5cmg"] Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.218939 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j26gs" event={"ID":"b2f8e393-1ed3-4475-bd0b-e0af8867a07a","Type":"ContainerDied","Data":"382a98cd9a0790fc6419b0a60953c99d5126f685c796d293a38b6c8871715e39"} Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.219041 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt"] Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.219127 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zd5l8" event={"ID":"638ad5ba-8cd0-49f3-817d-eb8c75ecc863","Type":"ContainerDied","Data":"c24fd2a96fbda832e078de2f17825928ccd2a840c6e8654990aeb7ce9549f1c6"} Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.219198 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"12644171-7711-41d1-9376-76515176916c","Type":"ContainerDied","Data":"3e5c2393623883e42dd6075d45431ac8be057fdc09415ea96756da6f907d7e81"} Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.219265 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e5c2393623883e42dd6075d45431ac8be057fdc09415ea96756da6f907d7e81" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.219787 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbddn" event={"ID":"c0c7bec4-aeda-4946-9599-726d61c41d93","Type":"ContainerDied","Data":"b1054e7d7f63c5344d78a93a8aac5d7206bfc6e3b4ecff76fe6c647e31d5adcb"} Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.219875 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xqx9c" event={"ID":"21ff8993-d52d-4dcc-a520-c1f46e8e1c6f","Type":"ContainerDied","Data":"ed2084bc9870c1310826629d82c0aa3474379e557f3750fca2d8062ee2e2f3a6"} Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.219968 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" event={"ID":"07c24009-786a-4a05-8c86-b94337ce730e","Type":"ContainerStarted","Data":"cf1b46e6d992ca87c41fa348a36348f28852bac86f92eeffd7792c2a965fa386"} Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.220043 5107 scope.go:117] "RemoveContainer" containerID="f35ee95538134827987f7dee88a5ce9aa2a91555f1ffea857329a5fa6d6b8175" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.230862 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-flbvs" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.290477 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l4hbq\" (UniqueName: 
\"kubernetes.io/projected/eea6e4fb-6302-4136-827b-04387e5f119f-kube-api-access-l4hbq\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.292873 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eea6e4fb-6302-4136-827b-04387e5f119f-tmp\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.293858 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eea6e4fb-6302-4136-827b-04387e5f119f-tmp\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.294103 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eea6e4fb-6302-4136-827b-04387e5f119f-serving-cert\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.294338 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-config\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.294410 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-client-ca\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.304835 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-proxy-ca-bundles\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.307540 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-proxy-ca-bundles\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.312575 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eea6e4fb-6302-4136-827b-04387e5f119f-serving-cert\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc 
kubenswrapper[5107]: I0126 00:12:09.314319 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-config\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.315012 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-client-ca\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.354241 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4hbq\" (UniqueName: \"kubernetes.io/projected/eea6e4fb-6302-4136-827b-04387e5f119f-kube-api-access-l4hbq\") pod \"controller-manager-5b595d7598-z5cmg\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.485367 5107 ???:1] "http: TLS handshake error from 192.168.126.11:47958: no serving certificate available for the kubelet" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.497537 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.552005 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xqx9c"] Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.557663 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xqx9c"] Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.740625 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:09 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Jan 26 00:12:09 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:12:09 crc kubenswrapper[5107]: healthz check failed Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.741044 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.775019 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.775107 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 
00:12:09.798537 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.920509 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56c0b45c-2648-462a-90aa-ebee1bb3358e-kube-api-access\") pod \"56c0b45c-2648-462a-90aa-ebee1bb3358e\" (UID: \"56c0b45c-2648-462a-90aa-ebee1bb3358e\") " Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.920574 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56c0b45c-2648-462a-90aa-ebee1bb3358e-kubelet-dir\") pod \"56c0b45c-2648-462a-90aa-ebee1bb3358e\" (UID: \"56c0b45c-2648-462a-90aa-ebee1bb3358e\") " Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.920941 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c0b45c-2648-462a-90aa-ebee1bb3358e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "56c0b45c-2648-462a-90aa-ebee1bb3358e" (UID: "56c0b45c-2648-462a-90aa-ebee1bb3358e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:09 crc kubenswrapper[5107]: I0126 00:12:09.933158 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c0b45c-2648-462a-90aa-ebee1bb3358e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "56c0b45c-2648-462a-90aa-ebee1bb3358e" (UID: "56c0b45c-2648-462a-90aa-ebee1bb3358e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.008287 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b595d7598-z5cmg"] Jan 26 00:12:10 crc kubenswrapper[5107]: W0126 00:12:10.022707 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeea6e4fb_6302_4136_827b_04387e5f119f.slice/crio-622fcb45c49b5d9113762b1e87f1e530772804ffc8e6dcd47b831fdb45c4d07a WatchSource:0}: Error finding container 622fcb45c49b5d9113762b1e87f1e530772804ffc8e6dcd47b831fdb45c4d07a: Status 404 returned error can't find the container with id 622fcb45c49b5d9113762b1e87f1e530772804ffc8e6dcd47b831fdb45c4d07a Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.022747 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56c0b45c-2648-462a-90aa-ebee1bb3358e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.022777 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56c0b45c-2648-462a-90aa-ebee1bb3358e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.122056 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ff8993-d52d-4dcc-a520-c1f46e8e1c6f" path="/var/lib/kubelet/pods/21ff8993-d52d-4dcc-a520-c1f46e8e1c6f/volumes" Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.234648 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" 
event={"ID":"56c0b45c-2648-462a-90aa-ebee1bb3358e","Type":"ContainerDied","Data":"ce5f75a70249639abd50cfb8ac696a8410a19c3b041a1920a8df88e8b43d6cda"} Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.235110 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce5f75a70249639abd50cfb8ac696a8410a19c3b041a1920a8df88e8b43d6cda" Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.235310 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.236765 5107 generic.go:358] "Generic (PLEG): container finished" podID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerID="e343434912061ab767cd5f0070acdd3eb8f610c4de32bd4adf42cebed94202be" exitCode=0 Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.236984 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2chhv" event={"ID":"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741","Type":"ContainerDied","Data":"e343434912061ab767cd5f0070acdd3eb8f610c4de32bd4adf42cebed94202be"} Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.244145 5107 generic.go:358] "Generic (PLEG): container finished" podID="b89f5a05-23c2-41e1-98b3-22ba5035191f" containerID="41f37484c14c6adb45db4c8392fa438acd69206f1bb007e3285c7bb2b3aebb4a" exitCode=0 Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.244270 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrc58" event={"ID":"b89f5a05-23c2-41e1-98b3-22ba5035191f","Type":"ContainerDied","Data":"41f37484c14c6adb45db4c8392fa438acd69206f1bb007e3285c7bb2b3aebb4a"} Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.247820 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" event={"ID":"eea6e4fb-6302-4136-827b-04387e5f119f","Type":"ContainerStarted","Data":"622fcb45c49b5d9113762b1e87f1e530772804ffc8e6dcd47b831fdb45c4d07a"} Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.741670 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:10 crc kubenswrapper[5107]: [+]has-synced ok Jan 26 00:12:10 crc kubenswrapper[5107]: [+]process-running ok Jan 26 00:12:10 crc kubenswrapper[5107]: healthz check failed Jan 26 00:12:10 crc kubenswrapper[5107]: I0126 00:12:10.741734 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:11 crc kubenswrapper[5107]: I0126 00:12:11.257974 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" event={"ID":"07c24009-786a-4a05-8c86-b94337ce730e","Type":"ContainerStarted","Data":"83342b2fc10cf08d21d8b62c002d08545ae906c4eab0d6d5bf42896574759c57"} Jan 26 00:12:11 crc kubenswrapper[5107]: I0126 00:12:11.258335 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:11 crc kubenswrapper[5107]: I0126 00:12:11.264349 5107 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" event={"ID":"eea6e4fb-6302-4136-827b-04387e5f119f","Type":"ContainerStarted","Data":"7b3fa7e9d33637c6761528b5de33f638b477b0350be01f08115e70dbd389396f"} Jan 26 00:12:11 crc kubenswrapper[5107]: I0126 00:12:11.265439 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:11 crc kubenswrapper[5107]: I0126 00:12:11.267019 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:11 crc kubenswrapper[5107]: I0126 00:12:11.267466 5107 patch_prober.go:28] interesting pod/controller-manager-5b595d7598-z5cmg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Jan 26 00:12:11 crc kubenswrapper[5107]: I0126 00:12:11.267525 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" podUID="eea6e4fb-6302-4136-827b-04387e5f119f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Jan 26 00:12:11 crc kubenswrapper[5107]: I0126 00:12:11.308363 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" podStartSLOduration=7.3083290309999995 podStartE2EDuration="7.308329031s" podCreationTimestamp="2026-01-26 00:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:11.306925802 +0000 UTC m=+176.224520168" watchObservedRunningTime="2026-01-26 00:12:11.308329031 +0000 UTC m=+176.225923377" Jan 26 00:12:11 crc kubenswrapper[5107]: I0126 00:12:11.311814 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" podStartSLOduration=6.311796619 podStartE2EDuration="6.311796619s" podCreationTimestamp="2026-01-26 00:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:11.283675909 +0000 UTC m=+176.201270255" watchObservedRunningTime="2026-01-26 00:12:11.311796619 +0000 UTC m=+176.229390965" Jan 26 00:12:11 crc kubenswrapper[5107]: E0126 00:12:11.332768 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:11 crc kubenswrapper[5107]: E0126 00:12:11.352756 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:11 crc kubenswrapper[5107]: E0126 00:12:11.368472 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown 
desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:11 crc kubenswrapper[5107]: E0126 00:12:11.368566 5107 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" podUID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:11 crc kubenswrapper[5107]: I0126 00:12:11.739332 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:12:11 crc kubenswrapper[5107]: I0126 00:12:11.741709 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" Jan 26 00:12:12 crc kubenswrapper[5107]: I0126 00:12:12.281750 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:13 crc kubenswrapper[5107]: I0126 00:12:13.476548 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:12:13 crc kubenswrapper[5107]: I0126 00:12:13.483388 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-4gmk9" Jan 26 00:12:13 crc kubenswrapper[5107]: I0126 00:12:13.750014 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:12:13 crc kubenswrapper[5107]: I0126 00:12:13.750120 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:12:13 crc kubenswrapper[5107]: I0126 00:12:13.750194 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-64rgr" Jan 26 00:12:13 crc kubenswrapper[5107]: I0126 00:12:13.751110 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"5041f82636ad9985627f247050509b672b7374047431fb966605ec3ca0acfb7d"} pod="openshift-console/downloads-747b44746d-64rgr" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 26 00:12:13 crc kubenswrapper[5107]: I0126 00:12:13.751161 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" containerID="cri-o://5041f82636ad9985627f247050509b672b7374047431fb966605ec3ca0acfb7d" gracePeriod=2 Jan 26 00:12:13 crc kubenswrapper[5107]: I0126 00:12:13.751545 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial 
tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:12:13 crc kubenswrapper[5107]: I0126 00:12:13.751630 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:12:17 crc kubenswrapper[5107]: I0126 00:12:17.094077 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b595d7598-z5cmg"] Jan 26 00:12:17 crc kubenswrapper[5107]: I0126 00:12:17.095023 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" podUID="eea6e4fb-6302-4136-827b-04387e5f119f" containerName="controller-manager" containerID="cri-o://7b3fa7e9d33637c6761528b5de33f638b477b0350be01f08115e70dbd389396f" gracePeriod=30 Jan 26 00:12:17 crc kubenswrapper[5107]: I0126 00:12:17.135628 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt"] Jan 26 00:12:17 crc kubenswrapper[5107]: I0126 00:12:17.135976 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" podUID="07c24009-786a-4a05-8c86-b94337ce730e" containerName="route-controller-manager" containerID="cri-o://83342b2fc10cf08d21d8b62c002d08545ae906c4eab0d6d5bf42896574759c57" gracePeriod=30 Jan 26 00:12:20 crc kubenswrapper[5107]: I0126 00:12:20.331702 5107 generic.go:358] "Generic (PLEG): container finished" podID="4498876a-5953-499f-aa71-6899b8529dcf" containerID="5041f82636ad9985627f247050509b672b7374047431fb966605ec3ca0acfb7d" exitCode=0 Jan 26 00:12:20 crc kubenswrapper[5107]: I0126 00:12:20.331790 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-64rgr" event={"ID":"4498876a-5953-499f-aa71-6899b8529dcf","Type":"ContainerDied","Data":"5041f82636ad9985627f247050509b672b7374047431fb966605ec3ca0acfb7d"} Jan 26 00:12:20 crc kubenswrapper[5107]: I0126 00:12:20.336117 5107 generic.go:358] "Generic (PLEG): container finished" podID="07c24009-786a-4a05-8c86-b94337ce730e" containerID="83342b2fc10cf08d21d8b62c002d08545ae906c4eab0d6d5bf42896574759c57" exitCode=0 Jan 26 00:12:20 crc kubenswrapper[5107]: I0126 00:12:20.336211 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" event={"ID":"07c24009-786a-4a05-8c86-b94337ce730e","Type":"ContainerDied","Data":"83342b2fc10cf08d21d8b62c002d08545ae906c4eab0d6d5bf42896574759c57"} Jan 26 00:12:20 crc kubenswrapper[5107]: I0126 00:12:20.337725 5107 generic.go:358] "Generic (PLEG): container finished" podID="eea6e4fb-6302-4136-827b-04387e5f119f" containerID="7b3fa7e9d33637c6761528b5de33f638b477b0350be01f08115e70dbd389396f" exitCode=0 Jan 26 00:12:20 crc kubenswrapper[5107]: I0126 00:12:20.337842 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" event={"ID":"eea6e4fb-6302-4136-827b-04387e5f119f","Type":"ContainerDied","Data":"7b3fa7e9d33637c6761528b5de33f638b477b0350be01f08115e70dbd389396f"} Jan 26 00:12:21 crc kubenswrapper[5107]: I0126 00:12:21.259516 5107 patch_prober.go:28] interesting pod/route-controller-manager-5f5646cf8b-9kgnt 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 26 00:12:21 crc kubenswrapper[5107]: I0126 00:12:21.259595 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" podUID="07c24009-786a-4a05-8c86-b94337ce730e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 26 00:12:21 crc kubenswrapper[5107]: E0126 00:12:21.310974 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:21 crc kubenswrapper[5107]: E0126 00:12:21.312641 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:21 crc kubenswrapper[5107]: E0126 00:12:21.314819 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:21 crc kubenswrapper[5107]: E0126 00:12:21.314921 5107 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" podUID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:22 crc kubenswrapper[5107]: I0126 00:12:22.277190 5107 patch_prober.go:28] interesting pod/controller-manager-5b595d7598-z5cmg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Jan 26 00:12:22 crc kubenswrapper[5107]: I0126 00:12:22.277274 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" podUID="eea6e4fb-6302-4136-827b-04387e5f119f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Jan 26 00:12:23 crc kubenswrapper[5107]: I0126 00:12:23.751713 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:12:23 crc kubenswrapper[5107]: I0126 00:12:23.751821 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:12:25 crc kubenswrapper[5107]: I0126 00:12:25.590357 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" Jan 26 00:12:26 crc kubenswrapper[5107]: I0126 00:12:26.383077 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hvxpc_568b36ce-cb38-401e-afc3-3c6e518c9c1a/kube-multus-additional-cni-plugins/0.log" Jan 26 00:12:26 crc kubenswrapper[5107]: I0126 00:12:26.383161 5107 generic.go:358] "Generic (PLEG): container finished" podID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" exitCode=137 Jan 26 00:12:26 crc kubenswrapper[5107]: I0126 00:12:26.383304 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" event={"ID":"568b36ce-cb38-401e-afc3-3c6e518c9c1a","Type":"ContainerDied","Data":"ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806"} Jan 26 00:12:30 crc kubenswrapper[5107]: I0126 00:12:30.036455 5107 ???:1] "http: TLS handshake error from 192.168.126.11:38844: no serving certificate available for the kubelet" Jan 26 00:12:30 crc kubenswrapper[5107]: I0126 00:12:30.743172 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:12:31 crc kubenswrapper[5107]: I0126 00:12:31.259849 5107 patch_prober.go:28] interesting pod/route-controller-manager-5f5646cf8b-9kgnt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 26 00:12:31 crc kubenswrapper[5107]: I0126 00:12:31.260065 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" podUID="07c24009-786a-4a05-8c86-b94337ce730e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 26 00:12:31 crc kubenswrapper[5107]: E0126 00:12:31.307836 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:31 crc kubenswrapper[5107]: E0126 00:12:31.308470 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:31 crc kubenswrapper[5107]: E0126 00:12:31.308734 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:31 crc kubenswrapper[5107]: E0126 00:12:31.308770 5107 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" podUID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:32 crc kubenswrapper[5107]: I0126 00:12:32.275651 5107 patch_prober.go:28] interesting pod/controller-manager-5b595d7598-z5cmg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Jan 26 00:12:32 crc kubenswrapper[5107]: I0126 00:12:32.275773 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" podUID="eea6e4fb-6302-4136-827b-04387e5f119f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Jan 26 00:12:32 crc kubenswrapper[5107]: I0126 00:12:32.781188 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mbr9b container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:12:32 crc kubenswrapper[5107]: I0126 00:12:32.781295 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-68cf44c8b8-mbr9b" podUID="80801f36-b03c-44af-bbaa-4e9a962f9a30" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 00:12:33 crc kubenswrapper[5107]: I0126 00:12:33.753828 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:12:33 crc kubenswrapper[5107]: I0126 00:12:33.753921 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:12:35 crc kubenswrapper[5107]: I0126 00:12:35.039341 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:12:35 crc kubenswrapper[5107]: I0126 00:12:35.040306 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="56c0b45c-2648-462a-90aa-ebee1bb3358e" containerName="pruner" Jan 26 00:12:35 crc kubenswrapper[5107]: I0126 00:12:35.040338 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c0b45c-2648-462a-90aa-ebee1bb3358e" containerName="pruner" Jan 26 00:12:35 crc 
kubenswrapper[5107]: I0126 00:12:35.040394 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="12644171-7711-41d1-9376-76515176916c" containerName="pruner" Jan 26 00:12:35 crc kubenswrapper[5107]: I0126 00:12:35.040406 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="12644171-7711-41d1-9376-76515176916c" containerName="pruner" Jan 26 00:12:35 crc kubenswrapper[5107]: I0126 00:12:35.040567 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="12644171-7711-41d1-9376-76515176916c" containerName="pruner" Jan 26 00:12:35 crc kubenswrapper[5107]: I0126 00:12:35.040585 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="56c0b45c-2648-462a-90aa-ebee1bb3358e" containerName="pruner" Jan 26 00:12:35 crc kubenswrapper[5107]: I0126 00:12:35.409046 5107 pod_container_manager_linux.go:217] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod8e75356d-8170-4619-9539-ea5e50c2b892"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod8e75356d-8170-4619-9539-ea5e50c2b892] : Timed out while waiting for systemd to remove kubepods-burstable-pod8e75356d_8170_4619_9539_ea5e50c2b892.slice" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.058104 5107 pod_container_manager_linux.go:217] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod1dcc8c3a-74e3-404d-8f0f-cec0001cf476"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod1dcc8c3a-74e3-404d-8f0f-cec0001cf476] : Timed out while waiting for systemd to remove kubepods-burstable-pod1dcc8c3a_74e3_404d_8f0f_cec0001cf476.slice" Jan 26 00:12:37 crc kubenswrapper[5107]: E0126 00:12:37.058460 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod1dcc8c3a-74e3-404d-8f0f-cec0001cf476] : unable to destroy cgroup paths for cgroup [kubepods burstable pod1dcc8c3a-74e3-404d-8f0f-cec0001cf476] : Timed out while waiting for systemd to remove kubepods-burstable-pod1dcc8c3a_74e3_404d_8f0f_cec0001cf476.slice" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" podUID="1dcc8c3a-74e3-404d-8f0f-cec0001cf476" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.243733 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.243866 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.245552 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.245552 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 
00:12:37.259290 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.346105 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.346374 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.348606 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.360031 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.372087 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.372532 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.427413 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.433430 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.532174 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.540768 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.635592 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.636402 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.638518 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.638306 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.648248 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.674954 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w"] Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.678577 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-6sr6w"] Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.752373 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5838a86a-169c-4e09-85d9-25b6e7ee17bb\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.752663 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5838a86a-169c-4e09-85d9-25b6e7ee17bb\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.854466 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5838a86a-169c-4e09-85d9-25b6e7ee17bb\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.854598 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5838a86a-169c-4e09-85d9-25b6e7ee17bb\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.854713 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5838a86a-169c-4e09-85d9-25b6e7ee17bb\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.891534 5107 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5838a86a-169c-4e09-85d9-25b6e7ee17bb\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:37 crc kubenswrapper[5107]: I0126 00:12:37.967791 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:38 crc kubenswrapper[5107]: I0126 00:12:38.120481 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dcc8c3a-74e3-404d-8f0f-cec0001cf476" path="/var/lib/kubelet/pods/1dcc8c3a-74e3-404d-8f0f-cec0001cf476/volumes" Jan 26 00:12:38 crc kubenswrapper[5107]: I0126 00:12:38.260105 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:12:38 crc kubenswrapper[5107]: I0126 00:12:38.263070 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 26 00:12:38 crc kubenswrapper[5107]: I0126 00:12:38.275134 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93b5402e-3f3e-4e3b-8cf4-f919871d0c86-metrics-certs\") pod \"network-metrics-daemon-bdn4m\" (UID: \"93b5402e-3f3e-4e3b-8cf4-f919871d0c86\") " pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:12:38 crc kubenswrapper[5107]: I0126 00:12:38.435312 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 26 00:12:38 crc kubenswrapper[5107]: I0126 00:12:38.444565 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bdn4m" Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.640479 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.675523 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.675722 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.763159 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac12125-d091-4b8b-89ba-b5b821b7a825-kube-api-access\") pod \"installer-12-crc\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.763225 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-var-lock\") pod \"installer-12-crc\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.763279 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-kubelet-dir\") pod \"installer-12-crc\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.864703 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-kubelet-dir\") pod \"installer-12-crc\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.864807 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac12125-d091-4b8b-89ba-b5b821b7a825-kube-api-access\") pod \"installer-12-crc\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.864836 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-var-lock\") pod \"installer-12-crc\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.864947 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-var-lock\") pod \"installer-12-crc\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.865004 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-kubelet-dir\") pod \"installer-12-crc\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:40 crc kubenswrapper[5107]: I0126 00:12:40.888292 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac12125-d091-4b8b-89ba-b5b821b7a825-kube-api-access\") pod \"installer-12-crc\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:41 crc kubenswrapper[5107]: I0126 00:12:41.005170 5107 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:41 crc kubenswrapper[5107]: I0126 00:12:41.276815 5107 patch_prober.go:28] interesting pod/route-controller-manager-5f5646cf8b-9kgnt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 26 00:12:41 crc kubenswrapper[5107]: I0126 00:12:41.276928 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" podUID="07c24009-786a-4a05-8c86-b94337ce730e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 26 00:12:41 crc kubenswrapper[5107]: E0126 00:12:41.307213 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:41 crc kubenswrapper[5107]: E0126 00:12:41.307932 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:41 crc kubenswrapper[5107]: E0126 00:12:41.308265 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:41 crc kubenswrapper[5107]: E0126 00:12:41.308368 5107 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" podUID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:42 crc kubenswrapper[5107]: I0126 00:12:42.276065 5107 patch_prober.go:28] interesting pod/controller-manager-5b595d7598-z5cmg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Jan 26 00:12:42 crc kubenswrapper[5107]: I0126 00:12:42.276181 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" podUID="eea6e4fb-6302-4136-827b-04387e5f119f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Jan 26 
00:12:43 crc kubenswrapper[5107]: I0126 00:12:43.752680 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:12:43 crc kubenswrapper[5107]: I0126 00:12:43.753491 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:12:51 crc kubenswrapper[5107]: E0126 00:12:51.308256 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:51 crc kubenswrapper[5107]: E0126 00:12:51.311114 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:51 crc kubenswrapper[5107]: E0126 00:12:51.311777 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:51 crc kubenswrapper[5107]: E0126 00:12:51.311908 5107 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" podUID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.455663 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.460577 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.464698 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hvxpc_568b36ce-cb38-401e-afc3-3c6e518c9c1a/kube-multus-additional-cni-plugins/0.log" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.464765 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.516285 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7"] Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.517472 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" containerName="kube-multus-additional-cni-plugins" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.517505 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" containerName="kube-multus-additional-cni-plugins" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.517543 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07c24009-786a-4a05-8c86-b94337ce730e" containerName="route-controller-manager" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.517553 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c24009-786a-4a05-8c86-b94337ce730e" containerName="route-controller-manager" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.517567 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eea6e4fb-6302-4136-827b-04387e5f119f" containerName="controller-manager" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.517576 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea6e4fb-6302-4136-827b-04387e5f119f" containerName="controller-manager" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.529224 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" containerName="kube-multus-additional-cni-plugins" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.529278 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="eea6e4fb-6302-4136-827b-04387e5f119f" containerName="controller-manager" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.529296 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="07c24009-786a-4a05-8c86-b94337ce730e" containerName="route-controller-manager" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.538515 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7"] Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.538746 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.540747 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65677c569c-qmptv"] Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.547996 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.566149 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65677c569c-qmptv"] Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.576565 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" event={"ID":"07c24009-786a-4a05-8c86-b94337ce730e","Type":"ContainerDied","Data":"cf1b46e6d992ca87c41fa348a36348f28852bac86f92eeffd7792c2a965fa386"} Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.576685 5107 scope.go:117] "RemoveContainer" containerID="83342b2fc10cf08d21d8b62c002d08545ae906c4eab0d6d5bf42896574759c57" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.576903 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.584909 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.585159 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hvxpc" event={"ID":"568b36ce-cb38-401e-afc3-3c6e518c9c1a","Type":"ContainerDied","Data":"1111a711ef028a1578b1e7f4d0c79072e7669296557a9cb959e0d63672fce082"} Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.589321 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" event={"ID":"eea6e4fb-6302-4136-827b-04387e5f119f","Type":"ContainerDied","Data":"622fcb45c49b5d9113762b1e87f1e530772804ffc8e6dcd47b831fdb45c4d07a"} Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.589878 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b595d7598-z5cmg" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.616802 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wrs6\" (UniqueName: \"kubernetes.io/projected/07c24009-786a-4a05-8c86-b94337ce730e-kube-api-access-8wrs6\") pod \"07c24009-786a-4a05-8c86-b94337ce730e\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.617001 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eea6e4fb-6302-4136-827b-04387e5f119f-serving-cert\") pod \"eea6e4fb-6302-4136-827b-04387e5f119f\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.617137 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c24009-786a-4a05-8c86-b94337ce730e-serving-cert\") pod \"07c24009-786a-4a05-8c86-b94337ce730e\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.618409 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4hbq\" (UniqueName: \"kubernetes.io/projected/eea6e4fb-6302-4136-827b-04387e5f119f-kube-api-access-l4hbq\") pod \"eea6e4fb-6302-4136-827b-04387e5f119f\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.618473 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/568b36ce-cb38-401e-afc3-3c6e518c9c1a-cni-sysctl-allowlist\") pod \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.618525 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-config\") pod \"eea6e4fb-6302-4136-827b-04387e5f119f\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.618568 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-client-ca\") pod \"07c24009-786a-4a05-8c86-b94337ce730e\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.618608 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-client-ca\") pod \"eea6e4fb-6302-4136-827b-04387e5f119f\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.618665 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-proxy-ca-bundles\") pod \"eea6e4fb-6302-4136-827b-04387e5f119f\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.618705 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/568b36ce-cb38-401e-afc3-3c6e518c9c1a-tuning-conf-dir\") pod 
\"568b36ce-cb38-401e-afc3-3c6e518c9c1a\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.621197 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eea6e4fb-6302-4136-827b-04387e5f119f-tmp\") pod \"eea6e4fb-6302-4136-827b-04387e5f119f\" (UID: \"eea6e4fb-6302-4136-827b-04387e5f119f\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.621350 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v24np\" (UniqueName: \"kubernetes.io/projected/568b36ce-cb38-401e-afc3-3c6e518c9c1a-kube-api-access-v24np\") pod \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.621488 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-config\") pod \"07c24009-786a-4a05-8c86-b94337ce730e\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.621573 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07c24009-786a-4a05-8c86-b94337ce730e-tmp\") pod \"07c24009-786a-4a05-8c86-b94337ce730e\" (UID: \"07c24009-786a-4a05-8c86-b94337ce730e\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.619067 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/568b36ce-cb38-401e-afc3-3c6e518c9c1a-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "568b36ce-cb38-401e-afc3-3c6e518c9c1a" (UID: "568b36ce-cb38-401e-afc3-3c6e518c9c1a"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.619780 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-client-ca" (OuterVolumeSpecName: "client-ca") pod "07c24009-786a-4a05-8c86-b94337ce730e" (UID: "07c24009-786a-4a05-8c86-b94337ce730e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.620356 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-client-ca" (OuterVolumeSpecName: "client-ca") pod "eea6e4fb-6302-4136-827b-04387e5f119f" (UID: "eea6e4fb-6302-4136-827b-04387e5f119f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.620446 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-config" (OuterVolumeSpecName: "config") pod "eea6e4fb-6302-4136-827b-04387e5f119f" (UID: "eea6e4fb-6302-4136-827b-04387e5f119f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.621057 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eea6e4fb-6302-4136-827b-04387e5f119f" (UID: "eea6e4fb-6302-4136-827b-04387e5f119f"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.621966 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea6e4fb-6302-4136-827b-04387e5f119f-tmp" (OuterVolumeSpecName: "tmp") pod "eea6e4fb-6302-4136-827b-04387e5f119f" (UID: "eea6e4fb-6302-4136-827b-04387e5f119f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.622066 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568b36ce-cb38-401e-afc3-3c6e518c9c1a-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "568b36ce-cb38-401e-afc3-3c6e518c9c1a" (UID: "568b36ce-cb38-401e-afc3-3c6e518c9c1a"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.623726 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/568b36ce-cb38-401e-afc3-3c6e518c9c1a-ready\") pod \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\" (UID: \"568b36ce-cb38-401e-afc3-3c6e518c9c1a\") " Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.623869 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-config\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.623957 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-proxy-ca-bundles\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.624117 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-tmp\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.624181 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36bdac93-b162-4ca6-bbb9-cde31af23bc6-serving-cert\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.624215 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-client-ca\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.624257 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36bdac93-b162-4ca6-bbb9-cde31af23bc6-tmp\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.624289 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nktb\" (UniqueName: \"kubernetes.io/projected/36bdac93-b162-4ca6-bbb9-cde31af23bc6-kube-api-access-5nktb\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.624397 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b47m\" (UniqueName: \"kubernetes.io/projected/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-kube-api-access-2b47m\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.624429 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-serving-cert\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.624496 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-config\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.624515 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-client-ca\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.625085 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568b36ce-cb38-401e-afc3-3c6e518c9c1a-ready" (OuterVolumeSpecName: "ready") pod "568b36ce-cb38-401e-afc3-3c6e518c9c1a" (UID: "568b36ce-cb38-401e-afc3-3c6e518c9c1a"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.625664 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-config" (OuterVolumeSpecName: "config") pod "07c24009-786a-4a05-8c86-b94337ce730e" (UID: "07c24009-786a-4a05-8c86-b94337ce730e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.623926 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c24009-786a-4a05-8c86-b94337ce730e-tmp" (OuterVolumeSpecName: "tmp") pod "07c24009-786a-4a05-8c86-b94337ce730e" (UID: "07c24009-786a-4a05-8c86-b94337ce730e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.684272 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c24009-786a-4a05-8c86-b94337ce730e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "07c24009-786a-4a05-8c86-b94337ce730e" (UID: "07c24009-786a-4a05-8c86-b94337ce730e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.684340 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c24009-786a-4a05-8c86-b94337ce730e-kube-api-access-8wrs6" (OuterVolumeSpecName: "kube-api-access-8wrs6") pod "07c24009-786a-4a05-8c86-b94337ce730e" (UID: "07c24009-786a-4a05-8c86-b94337ce730e"). InnerVolumeSpecName "kube-api-access-8wrs6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.684304 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea6e4fb-6302-4136-827b-04387e5f119f-kube-api-access-l4hbq" (OuterVolumeSpecName: "kube-api-access-l4hbq") pod "eea6e4fb-6302-4136-827b-04387e5f119f" (UID: "eea6e4fb-6302-4136-827b-04387e5f119f"). InnerVolumeSpecName "kube-api-access-l4hbq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.691129 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea6e4fb-6302-4136-827b-04387e5f119f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eea6e4fb-6302-4136-827b-04387e5f119f" (UID: "eea6e4fb-6302-4136-827b-04387e5f119f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.693351 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568b36ce-cb38-401e-afc3-3c6e518c9c1a-kube-api-access-v24np" (OuterVolumeSpecName: "kube-api-access-v24np") pod "568b36ce-cb38-401e-afc3-3c6e518c9c1a" (UID: "568b36ce-cb38-401e-afc3-3c6e518c9c1a"). InnerVolumeSpecName "kube-api-access-v24np". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.716194 5107 scope.go:117] "RemoveContainer" containerID="ea0ef8730a520bb97da0736b2ee2f4e5aff449f75459111973a7e05d9cf45806" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725504 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5nktb\" (UniqueName: \"kubernetes.io/projected/36bdac93-b162-4ca6-bbb9-cde31af23bc6-kube-api-access-5nktb\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725599 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2b47m\" (UniqueName: \"kubernetes.io/projected/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-kube-api-access-2b47m\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725647 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-serving-cert\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725695 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-config\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725715 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-client-ca\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725763 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-config\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725784 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-proxy-ca-bundles\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725825 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-tmp\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " 
pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725864 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36bdac93-b162-4ca6-bbb9-cde31af23bc6-serving-cert\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725906 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-client-ca\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725935 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36bdac93-b162-4ca6-bbb9-cde31af23bc6-tmp\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.725989 5107 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/568b36ce-cb38-401e-afc3-3c6e518c9c1a-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726005 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726017 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726029 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726040 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eea6e4fb-6302-4136-827b-04387e5f119f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726053 5107 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/568b36ce-cb38-401e-afc3-3c6e518c9c1a-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726067 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eea6e4fb-6302-4136-827b-04387e5f119f-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726078 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v24np\" (UniqueName: \"kubernetes.io/projected/568b36ce-cb38-401e-afc3-3c6e518c9c1a-kube-api-access-v24np\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726090 5107 reconciler_common.go:299] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c24009-786a-4a05-8c86-b94337ce730e-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726103 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07c24009-786a-4a05-8c86-b94337ce730e-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726115 5107 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/568b36ce-cb38-401e-afc3-3c6e518c9c1a-ready\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726127 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8wrs6\" (UniqueName: \"kubernetes.io/projected/07c24009-786a-4a05-8c86-b94337ce730e-kube-api-access-8wrs6\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726139 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eea6e4fb-6302-4136-827b-04387e5f119f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726152 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c24009-786a-4a05-8c86-b94337ce730e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726164 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l4hbq\" (UniqueName: \"kubernetes.io/projected/eea6e4fb-6302-4136-827b-04387e5f119f-kube-api-access-l4hbq\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.726719 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36bdac93-b162-4ca6-bbb9-cde31af23bc6-tmp\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.727782 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-tmp\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.727792 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-client-ca\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.728750 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-client-ca\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.729587 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-proxy-ca-bundles\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.732129 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-config\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.735059 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36bdac93-b162-4ca6-bbb9-cde31af23bc6-serving-cert\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.738084 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-serving-cert\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.752015 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b47m\" (UniqueName: \"kubernetes.io/projected/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-kube-api-access-2b47m\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.760531 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-config\") pod \"controller-manager-65677c569c-qmptv\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.783348 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nktb\" (UniqueName: \"kubernetes.io/projected/36bdac93-b162-4ca6-bbb9-cde31af23bc6-kube-api-access-5nktb\") pod \"route-controller-manager-86685bc4b9-sdjw7\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.868170 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.889830 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:51 crc kubenswrapper[5107]: I0126 00:12:51.891743 5107 scope.go:117] "RemoveContainer" containerID="7b3fa7e9d33637c6761528b5de33f638b477b0350be01f08115e70dbd389396f" Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.258676 5107 patch_prober.go:28] interesting pod/route-controller-manager-5f5646cf8b-9kgnt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.262240 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt" podUID="07c24009-786a-4a05-8c86-b94337ce730e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.261502 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvxpc"] Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.262942 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hvxpc"] Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.262999 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b595d7598-z5cmg"] Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.263018 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b595d7598-z5cmg"] Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.271946 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt"] Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.284426 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5646cf8b-9kgnt"] Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.609445 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zd5l8" event={"ID":"638ad5ba-8cd0-49f3-817d-eb8c75ecc863","Type":"ContainerStarted","Data":"6cd8605d50084b36a5d6ceaac6fedc96774ec8fdae1ca0972e18406e170ba31b"} Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.632785 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfr4w" event={"ID":"1d8cc2bf-c61e-4f0a-9bee-068919e02489","Type":"ContainerStarted","Data":"789381cf7ab635512a71d79fcd604d08479c2f8b35a19ac1f3b72d38ecd77b6c"} Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.638682 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-64rgr" event={"ID":"4498876a-5953-499f-aa71-6899b8529dcf","Type":"ContainerStarted","Data":"7354baf120344ed1dac52d3746c943295b30f546dee040fd19a9ac607e60408d"} Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.640044 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-64rgr" Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.640432 5107 patch_prober.go:28] 
interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.640490 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.725559 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:12:52 crc kubenswrapper[5107]: I0126 00:12:52.864134 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bdn4m"] Jan 26 00:12:53 crc kubenswrapper[5107]: I0126 00:12:52.895544 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:12:53 crc kubenswrapper[5107]: I0126 00:12:53.597986 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65677c569c-qmptv"] Jan 26 00:12:53 crc kubenswrapper[5107]: I0126 00:12:53.667071 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7"] Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:53.747102 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:53.747206 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.052662 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"9ac12125-d091-4b8b-89ba-b5b821b7a825","Type":"ContainerStarted","Data":"d567f6da3bc8a8a126853a3fb4ff9d1d6e71fecc021e39d22c9c17dbc4153f41"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.062350 5107 generic.go:358] "Generic (PLEG): container finished" podID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerID="789381cf7ab635512a71d79fcd604d08479c2f8b35a19ac1f3b72d38ecd77b6c" exitCode=0 Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.062465 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfr4w" event={"ID":"1d8cc2bf-c61e-4f0a-9bee-068919e02489","Type":"ContainerDied","Data":"789381cf7ab635512a71d79fcd604d08479c2f8b35a19ac1f3b72d38ecd77b6c"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.072573 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh5dd" event={"ID":"d71d7360-3eef-4260-b288-7fc9f8d6fecc","Type":"ContainerStarted","Data":"ef9e4e85d61e8fd9e5ef228771ddf68d0714d00cae59cbb1e507c84cbb21dd9f"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.079821 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2chhv" event={"ID":"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741","Type":"ContainerStarted","Data":"4e075291ef63ff967d8d65c5d76fc09ba957bc13112c1a5afd02bc8cb8ed9544"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.089812 5107 generic.go:358] "Generic (PLEG): container finished" podID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerID="607dd323faded728597c15559c17dfe1ff5ad380c36ed01979600857cc2d4938" exitCode=0 Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.089906 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk79k" event={"ID":"8eedae47-54cd-438f-93d5-73b21a1fb540","Type":"ContainerDied","Data":"607dd323faded728597c15559c17dfe1ff5ad380c36ed01979600857cc2d4938"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.092705 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"27755a46907505f8628f0476d1375c9e02aa0d15052af9d75e2295187633e593"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.140772 5107 generic.go:358] "Generic (PLEG): container finished" podID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" containerID="0476b02eb4481b60b1f7622bb55be837270ec85faa0a7248a18d7b563efb96dd" exitCode=0 Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.140937 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j26gs" event={"ID":"b2f8e393-1ed3-4475-bd0b-e0af8867a07a","Type":"ContainerDied","Data":"0476b02eb4481b60b1f7622bb55be837270ec85faa0a7248a18d7b563efb96dd"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.170817 5107 generic.go:358] "Generic (PLEG): container finished" podID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerID="6cd8605d50084b36a5d6ceaac6fedc96774ec8fdae1ca0972e18406e170ba31b" exitCode=0 Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.178487 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c24009-786a-4a05-8c86-b94337ce730e" path="/var/lib/kubelet/pods/07c24009-786a-4a05-8c86-b94337ce730e/volumes" Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.181353 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568b36ce-cb38-401e-afc3-3c6e518c9c1a" path="/var/lib/kubelet/pods/568b36ce-cb38-401e-afc3-3c6e518c9c1a/volumes" Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.206332 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea6e4fb-6302-4136-827b-04387e5f119f" path="/var/lib/kubelet/pods/eea6e4fb-6302-4136-827b-04387e5f119f/volumes" Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.209293 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zd5l8" event={"ID":"638ad5ba-8cd0-49f3-817d-eb8c75ecc863","Type":"ContainerDied","Data":"6cd8605d50084b36a5d6ceaac6fedc96774ec8fdae1ca0972e18406e170ba31b"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.209342 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrc58" event={"ID":"b89f5a05-23c2-41e1-98b3-22ba5035191f","Type":"ContainerStarted","Data":"31b587e42148a5a04645b44e0e7d20cf4d7c82e7b74a9e124a749dd19f354a23"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.223454 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" 
event={"ID":"93b5402e-3f3e-4e3b-8cf4-f919871d0c86","Type":"ContainerStarted","Data":"be13c7d5ba37f5e52518b508a9a9a7938fab49e80a9552ba7d2df4aad756ec16"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.223557 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5838a86a-169c-4e09-85d9-25b6e7ee17bb","Type":"ContainerStarted","Data":"12b8af083095057d3f3844980709b7ec5902e9f4213a4cdb925c423aaa648cf1"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.223585 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbddn" event={"ID":"c0c7bec4-aeda-4946-9599-726d61c41d93","Type":"ContainerStarted","Data":"17b412020faa68cbabd172ad15478d5ec84b05e4fc24012a2171a8483c4c1037"} Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.862404 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:12:54 crc kubenswrapper[5107]: I0126 00:12:54.862482 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:12:55 crc kubenswrapper[5107]: I0126 00:12:55.382483 5107 generic.go:358] "Generic (PLEG): container finished" podID="c0c7bec4-aeda-4946-9599-726d61c41d93" containerID="17b412020faa68cbabd172ad15478d5ec84b05e4fc24012a2171a8483c4c1037" exitCode=0 Jan 26 00:12:55 crc kubenswrapper[5107]: I0126 00:12:55.383011 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbddn" event={"ID":"c0c7bec4-aeda-4946-9599-726d61c41d93","Type":"ContainerDied","Data":"17b412020faa68cbabd172ad15478d5ec84b05e4fc24012a2171a8483c4c1037"} Jan 26 00:12:55 crc kubenswrapper[5107]: I0126 00:12:55.390058 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" event={"ID":"36bdac93-b162-4ca6-bbb9-cde31af23bc6","Type":"ContainerStarted","Data":"d9a9c9385c69e132e3ed9095a5b25416ff219e8b65dd6397fec93a5422ad12fa"} Jan 26 00:12:55 crc kubenswrapper[5107]: I0126 00:12:55.394642 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"d236958989f14a8f56247e00981176734412882f2b01ed0ace21b82a6f6102df"} Jan 26 00:12:55 crc kubenswrapper[5107]: I0126 00:12:55.397023 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfr4w" event={"ID":"1d8cc2bf-c61e-4f0a-9bee-068919e02489","Type":"ContainerStarted","Data":"98d2032ad9fc86b0950204045ab6ddfadbe5bc9159f179a91a78da8692b7ceeb"} Jan 26 00:12:55 crc kubenswrapper[5107]: I0126 00:12:55.398017 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" event={"ID":"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8","Type":"ContainerStarted","Data":"c81ce04f19c6b075c36b0d1a80c8265905fb82a791f78a4d52cce8c48b50f01b"} Jan 26 00:12:55 crc kubenswrapper[5107]: I0126 00:12:55.399908 5107 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"9fb1f835ae0cf49641d1b5b447acbfb0a6764e3a251713383ceddcb2a883b5ab"} Jan 26 00:12:55 crc kubenswrapper[5107]: I0126 00:12:55.707506 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:12:55 crc kubenswrapper[5107]: I0126 00:12:55.707652 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:12:56 crc kubenswrapper[5107]: I0126 00:12:56.413925 5107 generic.go:358] "Generic (PLEG): container finished" podID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" containerID="ef9e4e85d61e8fd9e5ef228771ddf68d0714d00cae59cbb1e507c84cbb21dd9f" exitCode=0 Jan 26 00:12:56 crc kubenswrapper[5107]: I0126 00:12:56.414027 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh5dd" event={"ID":"d71d7360-3eef-4260-b288-7fc9f8d6fecc","Type":"ContainerDied","Data":"ef9e4e85d61e8fd9e5ef228771ddf68d0714d00cae59cbb1e507c84cbb21dd9f"} Jan 26 00:12:56 crc kubenswrapper[5107]: I0126 00:12:56.418052 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"f307e75e4f59bf0c875e131a844e5d71bc46ca7d34e29387c5c4f819fe685594"} Jan 26 00:12:57 crc kubenswrapper[5107]: I0126 00:12:57.453720 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zd5l8" event={"ID":"638ad5ba-8cd0-49f3-817d-eb8c75ecc863","Type":"ContainerStarted","Data":"613771156e276340943201193d15d2878272cd5d48f7fbdf610dc106b79fd6ad"} Jan 26 00:12:57 crc kubenswrapper[5107]: I0126 00:12:57.471195 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" event={"ID":"36bdac93-b162-4ca6-bbb9-cde31af23bc6","Type":"ContainerStarted","Data":"460cc97a8269de781160ea8cfe9e84a86566ba203d63008198c8e0081698739d"} Jan 26 00:12:57 crc kubenswrapper[5107]: I0126 00:12:57.508199 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk79k" event={"ID":"8eedae47-54cd-438f-93d5-73b21a1fb540","Type":"ContainerStarted","Data":"34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd"} Jan 26 00:12:57 crc kubenswrapper[5107]: I0126 00:12:57.520299 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j26gs" event={"ID":"b2f8e393-1ed3-4475-bd0b-e0af8867a07a","Type":"ContainerStarted","Data":"b112e2d375a4a874f3e1836260e9227648dfec06b82235a8f7ca11c00e5377b1"} Jan 26 00:12:57 crc kubenswrapper[5107]: I0126 00:12:57.520704 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:12:57 crc kubenswrapper[5107]: I0126 00:12:57.554858 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-vk79k" podStartSLOduration=21.063453533 podStartE2EDuration="55.554827006s" podCreationTimestamp="2026-01-26 00:12:02 +0000 UTC" firstStartedPulling="2026-01-26 00:12:09.176670121 +0000 UTC m=+174.094264467" lastFinishedPulling="2026-01-26 00:12:43.668043594 +0000 UTC m=+208.585637940" observedRunningTime="2026-01-26 00:12:57.533776653 +0000 UTC m=+222.451371029" watchObservedRunningTime="2026-01-26 00:12:57.554827006 +0000 UTC m=+222.472421352" Jan 26 00:12:57 crc kubenswrapper[5107]: I0126 00:12:57.582203 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bfr4w" podStartSLOduration=11.159232844 podStartE2EDuration="57.582181859s" podCreationTimestamp="2026-01-26 00:12:00 +0000 UTC" firstStartedPulling="2026-01-26 00:12:04.971476933 +0000 UTC m=+169.889071279" lastFinishedPulling="2026-01-26 00:12:51.394425948 +0000 UTC m=+216.312020294" observedRunningTime="2026-01-26 00:12:57.578600006 +0000 UTC m=+222.496194362" watchObservedRunningTime="2026-01-26 00:12:57.582181859 +0000 UTC m=+222.499776205" Jan 26 00:12:58 crc kubenswrapper[5107]: I0126 00:12:58.852000 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" event={"ID":"93b5402e-3f3e-4e3b-8cf4-f919871d0c86","Type":"ContainerStarted","Data":"01cdbba9d0f0503db8e03e6a2d80d0f494ba8842b50d55dab00b2ed34b98cb51"} Jan 26 00:12:58 crc kubenswrapper[5107]: I0126 00:12:58.855481 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5838a86a-169c-4e09-85d9-25b6e7ee17bb","Type":"ContainerStarted","Data":"1707d9f352c3d1dc46fd2664a0e5f18f01303d11bfde7aca72107cfd64c90289"} Jan 26 00:12:58 crc kubenswrapper[5107]: I0126 00:12:58.856824 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"9ac12125-d091-4b8b-89ba-b5b821b7a825","Type":"ContainerStarted","Data":"75fa3b2f0066620ad3ea188036e696142bb91d9e2b86acff05551ed2d996b344"} Jan 26 00:12:58 crc kubenswrapper[5107]: I0126 00:12:58.859193 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"086e197357278a5464e2d45f0d9c28974bef40df42a1989a4a76d3ce1bf7648b"} Jan 26 00:12:58 crc kubenswrapper[5107]: I0126 00:12:58.860955 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" event={"ID":"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8","Type":"ContainerStarted","Data":"ce2db7f715e78303467bbaa7d682554703bfd9f3ee99cb9bd4571e6402d3a2c8"} Jan 26 00:12:58 crc kubenswrapper[5107]: I0126 00:12:58.862255 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"30b5b39a4fd8e43e5dc28a60ce3c0f8d0a7eb4ce38545d45246e066d010e5a4d"} Jan 26 00:12:59 crc kubenswrapper[5107]: I0126 00:12:59.730686 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:59 crc kubenswrapper[5107]: I0126 00:12:59.732171 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:59 crc kubenswrapper[5107]: I0126 00:12:59.741277 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:12:59 crc kubenswrapper[5107]: I0126 00:12:59.832908 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:12:59 crc kubenswrapper[5107]: I0126 00:12:59.867982 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=24.867962429 podStartE2EDuration="24.867962429s" podCreationTimestamp="2026-01-26 00:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:59.865635072 +0000 UTC m=+224.783229418" watchObservedRunningTime="2026-01-26 00:12:59.867962429 +0000 UTC m=+224.785556775" Jan 26 00:12:59 crc kubenswrapper[5107]: I0126 00:12:59.893737 5107 generic.go:358] "Generic (PLEG): container finished" podID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerID="4e075291ef63ff967d8d65c5d76fc09ba957bc13112c1a5afd02bc8cb8ed9544" exitCode=0 Jan 26 00:12:59 crc kubenswrapper[5107]: I0126 00:12:59.894500 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2chhv" event={"ID":"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741","Type":"ContainerDied","Data":"4e075291ef63ff967d8d65c5d76fc09ba957bc13112c1a5afd02bc8cb8ed9544"} Jan 26 00:12:59 crc kubenswrapper[5107]: I0126 00:12:59.908849 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=19.908444997 podStartE2EDuration="19.908444997s" podCreationTimestamp="2026-01-26 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:59.905284367 +0000 UTC m=+224.822878723" watchObservedRunningTime="2026-01-26 00:12:59.908444997 +0000 UTC m=+224.826039353" Jan 26 00:12:59 crc kubenswrapper[5107]: I0126 00:12:59.926530 5107 generic.go:358] "Generic (PLEG): container finished" podID="b89f5a05-23c2-41e1-98b3-22ba5035191f" containerID="31b587e42148a5a04645b44e0e7d20cf4d7c82e7b74a9e124a749dd19f354a23" exitCode=0 Jan 26 00:12:59 crc kubenswrapper[5107]: I0126 00:12:59.926730 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrc58" event={"ID":"b89f5a05-23c2-41e1-98b3-22ba5035191f","Type":"ContainerDied","Data":"31b587e42148a5a04645b44e0e7d20cf4d7c82e7b74a9e124a749dd19f354a23"} Jan 26 00:12:59 crc kubenswrapper[5107]: I0126 00:12:59.970020 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j26gs" podStartSLOduration=15.443219149 podStartE2EDuration="57.969997789s" podCreationTimestamp="2026-01-26 00:12:02 +0000 UTC" firstStartedPulling="2026-01-26 00:12:09.171282859 +0000 UTC m=+174.088877205" lastFinishedPulling="2026-01-26 00:12:51.698061499 +0000 UTC m=+216.615655845" observedRunningTime="2026-01-26 00:12:59.944304414 +0000 UTC m=+224.861898760" watchObservedRunningTime="2026-01-26 00:12:59.969997789 +0000 UTC m=+224.887592135" Jan 26 00:13:00 crc kubenswrapper[5107]: I0126 00:13:00.053676 5107 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-gbddn" event={"ID":"c0c7bec4-aeda-4946-9599-726d61c41d93","Type":"ContainerStarted","Data":"4766a318aa053ca1b5b81553962d132be2196a9749a08ae6d6c74ccc97fc5675"} Jan 26 00:13:00 crc kubenswrapper[5107]: I0126 00:13:00.070747 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" podStartSLOduration=43.070726533 podStartE2EDuration="43.070726533s" podCreationTimestamp="2026-01-26 00:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:13:00.006247557 +0000 UTC m=+224.923841923" watchObservedRunningTime="2026-01-26 00:13:00.070726533 +0000 UTC m=+224.988320879" Jan 26 00:13:00 crc kubenswrapper[5107]: I0126 00:13:00.094766 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh5dd" event={"ID":"d71d7360-3eef-4260-b288-7fc9f8d6fecc","Type":"ContainerStarted","Data":"c17e5e951e3f10c30c7488ada651ed12e6f0a0893b9447b8b18ae6da7137ed70"} Jan 26 00:13:00 crc kubenswrapper[5107]: I0126 00:13:00.316820 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" podStartSLOduration=43.316776876 podStartE2EDuration="43.316776876s" podCreationTimestamp="2026-01-26 00:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:13:00.079453092 +0000 UTC m=+224.997047438" watchObservedRunningTime="2026-01-26 00:13:00.316776876 +0000 UTC m=+225.234371222" Jan 26 00:13:00 crc kubenswrapper[5107]: I0126 00:13:00.413955 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zd5l8" podStartSLOduration=18.186784533 podStartE2EDuration="1m0.413923026s" podCreationTimestamp="2026-01-26 00:12:00 +0000 UTC" firstStartedPulling="2026-01-26 00:12:09.17132099 +0000 UTC m=+174.088915346" lastFinishedPulling="2026-01-26 00:12:51.398459503 +0000 UTC m=+216.316053839" observedRunningTime="2026-01-26 00:13:00.260310539 +0000 UTC m=+225.177904905" watchObservedRunningTime="2026-01-26 00:13:00.413923026 +0000 UTC m=+225.331517372" Jan 26 00:13:00 crc kubenswrapper[5107]: I0126 00:13:00.727236 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:13:00 crc kubenswrapper[5107]: I0126 00:13:00.727382 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:13:01 crc kubenswrapper[5107]: I0126 00:13:01.128968 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gbddn" podStartSLOduration=18.664094584 podStartE2EDuration="1m1.128949383s" podCreationTimestamp="2026-01-26 00:12:00 +0000 UTC" firstStartedPulling="2026-01-26 00:12:09.17557073 +0000 UTC m=+174.093165076" lastFinishedPulling="2026-01-26 
00:12:51.640425529 +0000 UTC m=+216.558019875" observedRunningTime="2026-01-26 00:13:01.124901567 +0000 UTC m=+226.042495913" watchObservedRunningTime="2026-01-26 00:13:01.128949383 +0000 UTC m=+226.046543729" Jan 26 00:13:01 crc kubenswrapper[5107]: I0126 00:13:01.233287 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:13:01 crc kubenswrapper[5107]: I0126 00:13:01.233380 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:13:01 crc kubenswrapper[5107]: I0126 00:13:01.258156 5107 generic.go:358] "Generic (PLEG): container finished" podID="5838a86a-169c-4e09-85d9-25b6e7ee17bb" containerID="1707d9f352c3d1dc46fd2664a0e5f18f01303d11bfde7aca72107cfd64c90289" exitCode=0 Jan 26 00:13:01 crc kubenswrapper[5107]: I0126 00:13:01.258245 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5838a86a-169c-4e09-85d9-25b6e7ee17bb","Type":"ContainerDied","Data":"1707d9f352c3d1dc46fd2664a0e5f18f01303d11bfde7aca72107cfd64c90289"} Jan 26 00:13:01 crc kubenswrapper[5107]: I0126 00:13:01.284391 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:13:01 crc kubenswrapper[5107]: I0126 00:13:01.284575 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:13:01 crc kubenswrapper[5107]: I0126 00:13:01.306480 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:13:01 crc kubenswrapper[5107]: I0126 00:13:01.306553 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:13:01 crc kubenswrapper[5107]: I0126 00:13:01.941110 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bh5dd" podStartSLOduration=15.221349656 podStartE2EDuration="1m1.94107368s" podCreationTimestamp="2026-01-26 00:12:00 +0000 UTC" firstStartedPulling="2026-01-26 00:12:04.977547023 +0000 UTC m=+169.895141369" lastFinishedPulling="2026-01-26 00:12:51.697271047 +0000 UTC m=+216.614865393" observedRunningTime="2026-01-26 00:13:01.939833934 +0000 UTC m=+226.857428280" watchObservedRunningTime="2026-01-26 00:13:01.94107368 +0000 UTC m=+226.858668026" Jan 26 00:13:02 crc kubenswrapper[5107]: I0126 00:13:02.270303 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bdn4m" event={"ID":"93b5402e-3f3e-4e3b-8cf4-f919871d0c86","Type":"ContainerStarted","Data":"74dd70057236a15893d917ea7e640acc3dc2794944344d951e8d5c7e2689e27d"} Jan 26 00:13:02 crc kubenswrapper[5107]: I0126 00:13:02.495600 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zd5l8" podUID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerName="registry-server" probeResult="failure" output=< Jan 26 00:13:02 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s Jan 26 00:13:02 crc kubenswrapper[5107]: > Jan 26 00:13:02 crc kubenswrapper[5107]: I0126 00:13:02.533545 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gbddn" podUID="c0c7bec4-aeda-4946-9599-726d61c41d93" 
containerName="registry-server" probeResult="failure" output=< Jan 26 00:13:02 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s Jan 26 00:13:02 crc kubenswrapper[5107]: > Jan 26 00:13:02 crc kubenswrapper[5107]: I0126 00:13:02.540065 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bfr4w" podUID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerName="registry-server" probeResult="failure" output=< Jan 26 00:13:02 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s Jan 26 00:13:02 crc kubenswrapper[5107]: > Jan 26 00:13:02 crc kubenswrapper[5107]: I0126 00:13:02.542848 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:13:02 crc kubenswrapper[5107]: I0126 00:13:02.542938 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:13:03 crc kubenswrapper[5107]: I0126 00:13:03.310732 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2chhv" event={"ID":"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741","Type":"ContainerStarted","Data":"4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033"} Jan 26 00:13:03 crc kubenswrapper[5107]: I0126 00:13:03.337310 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2chhv" podStartSLOduration=18.807026432 podStartE2EDuration="1m0.337292536s" podCreationTimestamp="2026-01-26 00:12:03 +0000 UTC" firstStartedPulling="2026-01-26 00:12:10.237952284 +0000 UTC m=+175.155546630" lastFinishedPulling="2026-01-26 00:12:51.768218388 +0000 UTC m=+216.685812734" observedRunningTime="2026-01-26 00:13:03.335632009 +0000 UTC m=+228.253226375" watchObservedRunningTime="2026-01-26 00:13:03.337292536 +0000 UTC m=+228.254886882" Jan 26 00:13:03 crc kubenswrapper[5107]: I0126 00:13:03.358670 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrc58" event={"ID":"b89f5a05-23c2-41e1-98b3-22ba5035191f","Type":"ContainerStarted","Data":"8ca89379a16ead32839334d508c1e47635d9c0035647b3e218ae7d8eddfcacfe"} Jan 26 00:13:03 crc kubenswrapper[5107]: I0126 00:13:03.692028 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:13:03 crc kubenswrapper[5107]: I0126 00:13:03.694380 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:13:03 crc kubenswrapper[5107]: I0126 00:13:03.746370 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-j26gs" podUID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" containerName="registry-server" probeResult="failure" output=< Jan 26 00:13:03 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s Jan 26 00:13:03 crc kubenswrapper[5107]: > Jan 26 00:13:03 crc kubenswrapper[5107]: I0126 00:13:03.748694 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:13:03 crc kubenswrapper[5107]: I0126 00:13:03.748808 5107 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:13:03 crc kubenswrapper[5107]: I0126 00:13:03.749616 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bdn4m" podStartSLOduration=202.749598178 podStartE2EDuration="3m22.749598178s" podCreationTimestamp="2026-01-26 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:13:03.74720435 +0000 UTC m=+228.664798706" watchObservedRunningTime="2026-01-26 00:13:03.749598178 +0000 UTC m=+228.667192544" Jan 26 00:13:03 crc kubenswrapper[5107]: I0126 00:13:03.760821 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lrc58" podStartSLOduration=19.077599753 podStartE2EDuration="1m0.760770548s" podCreationTimestamp="2026-01-26 00:12:03 +0000 UTC" firstStartedPulling="2026-01-26 00:12:10.245455264 +0000 UTC m=+175.163049610" lastFinishedPulling="2026-01-26 00:12:51.928626059 +0000 UTC m=+216.846220405" observedRunningTime="2026-01-26 00:13:03.58476934 +0000 UTC m=+228.502363686" watchObservedRunningTime="2026-01-26 00:13:03.760770548 +0000 UTC m=+228.678364894" Jan 26 00:13:03 crc kubenswrapper[5107]: I0126 00:13:03.837367 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:13:04 crc kubenswrapper[5107]: I0126 00:13:04.527665 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:13:04 crc kubenswrapper[5107]: I0126 00:13:04.599210 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:13:04 crc kubenswrapper[5107]: I0126 00:13:04.803469 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kube-api-access\") pod \"5838a86a-169c-4e09-85d9-25b6e7ee17bb\" (UID: \"5838a86a-169c-4e09-85d9-25b6e7ee17bb\") " Jan 26 00:13:04 crc kubenswrapper[5107]: I0126 00:13:04.803714 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kubelet-dir\") pod \"5838a86a-169c-4e09-85d9-25b6e7ee17bb\" (UID: \"5838a86a-169c-4e09-85d9-25b6e7ee17bb\") " Jan 26 00:13:04 crc kubenswrapper[5107]: I0126 00:13:04.803907 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5838a86a-169c-4e09-85d9-25b6e7ee17bb" (UID: "5838a86a-169c-4e09-85d9-25b6e7ee17bb"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:04 crc kubenswrapper[5107]: I0126 00:13:04.804373 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:04 crc kubenswrapper[5107]: I0126 00:13:04.823347 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5838a86a-169c-4e09-85d9-25b6e7ee17bb" (UID: "5838a86a-169c-4e09-85d9-25b6e7ee17bb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:04 crc kubenswrapper[5107]: I0126 00:13:04.905572 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5838a86a-169c-4e09-85d9-25b6e7ee17bb-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:05 crc kubenswrapper[5107]: I0126 00:13:05.400019 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:13:05 crc kubenswrapper[5107]: I0126 00:13:05.400504 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5838a86a-169c-4e09-85d9-25b6e7ee17bb","Type":"ContainerDied","Data":"12b8af083095057d3f3844980709b7ec5902e9f4213a4cdb925c423aaa648cf1"} Jan 26 00:13:05 crc kubenswrapper[5107]: I0126 00:13:05.400541 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12b8af083095057d3f3844980709b7ec5902e9f4213a4cdb925c423aaa648cf1" Jan 26 00:13:05 crc kubenswrapper[5107]: I0126 00:13:05.670710 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:13:05 crc kubenswrapper[5107]: I0126 00:13:05.671166 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:13:05 crc kubenswrapper[5107]: I0126 00:13:05.932143 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:13:05 crc kubenswrapper[5107]: I0126 00:13:05.932216 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:13:06 crc kubenswrapper[5107]: I0126 00:13:06.052867 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:13:06 crc kubenswrapper[5107]: I0126 00:13:06.054040 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:13:06 crc kubenswrapper[5107]: I0126 00:13:06.994661 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk79k"] Jan 26 00:13:07 crc kubenswrapper[5107]: I0126 00:13:07.044239 5107 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-2chhv" podUID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerName="registry-server" probeResult="failure" output=< Jan 26 00:13:07 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s Jan 26 00:13:07 crc kubenswrapper[5107]: > Jan 26 00:13:07 crc kubenswrapper[5107]: I0126 00:13:07.130480 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lrc58" podUID="b89f5a05-23c2-41e1-98b3-22ba5035191f" containerName="registry-server" probeResult="failure" output=< Jan 26 00:13:07 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s Jan 26 00:13:07 crc kubenswrapper[5107]: > Jan 26 00:13:07 crc kubenswrapper[5107]: I0126 00:13:07.599802 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vk79k" podUID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerName="registry-server" containerID="cri-o://34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd" gracePeriod=2 Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.027067 5107 ???:1] "http: TLS handshake error from 192.168.126.11:34876: no serving certificate available for the kubelet" Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.216630 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.216704 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.281154 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.286402 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.415111 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.432163 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.452756 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.493184 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.532046 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.629743 5107 generic.go:358] "Generic (PLEG): container finished" podID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerID="34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd" exitCode=0 Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.629846 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk79k" 
event={"ID":"8eedae47-54cd-438f-93d5-73b21a1fb540","Type":"ContainerDied","Data":"34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd"} Jan 26 00:13:11 crc kubenswrapper[5107]: I0126 00:13:11.703024 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:13:12 crc kubenswrapper[5107]: I0126 00:13:12.613719 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:13:12 crc kubenswrapper[5107]: I0126 00:13:12.685974 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:13:13 crc kubenswrapper[5107]: I0126 00:13:13.551166 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zd5l8"] Jan 26 00:13:13 crc kubenswrapper[5107]: I0126 00:13:13.551500 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zd5l8" podUID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerName="registry-server" containerID="cri-o://613771156e276340943201193d15d2878272cd5d48f7fbdf610dc106b79fd6ad" gracePeriod=2 Jan 26 00:13:13 crc kubenswrapper[5107]: I0126 00:13:13.719332 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bfr4w"] Jan 26 00:13:13 crc kubenswrapper[5107]: I0126 00:13:13.719759 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bfr4w" podUID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerName="registry-server" containerID="cri-o://98d2032ad9fc86b0950204045ab6ddfadbe5bc9159f179a91a78da8692b7ceeb" gracePeriod=2 Jan 26 00:13:13 crc kubenswrapper[5107]: I0126 00:13:13.747073 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:13:13 crc kubenswrapper[5107]: I0126 00:13:13.747184 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:13:13 crc kubenswrapper[5107]: I0126 00:13:13.747262 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-64rgr" Jan 26 00:13:13 crc kubenswrapper[5107]: I0126 00:13:13.747981 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"7354baf120344ed1dac52d3746c943295b30f546dee040fd19a9ac607e60408d"} pod="openshift-console/downloads-747b44746d-64rgr" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 26 00:13:13 crc kubenswrapper[5107]: I0126 00:13:13.748030 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" containerID="cri-o://7354baf120344ed1dac52d3746c943295b30f546dee040fd19a9ac607e60408d" gracePeriod=2 Jan 26 00:13:13 crc kubenswrapper[5107]: I0126 00:13:13.748072 5107 
patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:13:13 crc kubenswrapper[5107]: I0126 00:13:13.748173 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:13:14 crc kubenswrapper[5107]: E0126 00:13:14.386376 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd is running failed: container process not found" containerID="34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:14 crc kubenswrapper[5107]: E0126 00:13:14.387730 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd is running failed: container process not found" containerID="34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:14 crc kubenswrapper[5107]: E0126 00:13:14.388193 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd is running failed: container process not found" containerID="34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:14 crc kubenswrapper[5107]: E0126 00:13:14.388334 5107 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-vk79k" podUID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerName="registry-server" probeResult="unknown" Jan 26 00:13:15 crc kubenswrapper[5107]: I0126 00:13:15.983992 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:13:16 crc kubenswrapper[5107]: I0126 00:13:16.026467 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:13:16 crc kubenswrapper[5107]: I0126 00:13:16.099930 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:13:16 crc kubenswrapper[5107]: I0126 00:13:16.158777 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:13:17 crc kubenswrapper[5107]: I0126 00:13:17.094284 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65677c569c-qmptv"] Jan 26 00:13:17 crc kubenswrapper[5107]: I0126 00:13:17.094585 5107 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" podUID="ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" containerName="controller-manager" containerID="cri-o://ce2db7f715e78303467bbaa7d682554703bfd9f3ee99cb9bd4571e6402d3a2c8" gracePeriod=30 Jan 26 00:13:17 crc kubenswrapper[5107]: I0126 00:13:17.120710 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7"] Jan 26 00:13:17 crc kubenswrapper[5107]: I0126 00:13:17.125779 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" podUID="36bdac93-b162-4ca6-bbb9-cde31af23bc6" containerName="route-controller-manager" containerID="cri-o://460cc97a8269de781160ea8cfe9e84a86566ba203d63008198c8e0081698739d" gracePeriod=30 Jan 26 00:13:17 crc kubenswrapper[5107]: I0126 00:13:17.908519 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:13:17 crc kubenswrapper[5107]: I0126 00:13:17.985723 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-utilities\") pod \"8eedae47-54cd-438f-93d5-73b21a1fb540\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " Jan 26 00:13:17 crc kubenswrapper[5107]: I0126 00:13:17.985899 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-catalog-content\") pod \"8eedae47-54cd-438f-93d5-73b21a1fb540\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " Jan 26 00:13:17 crc kubenswrapper[5107]: I0126 00:13:17.985964 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbczg\" (UniqueName: \"kubernetes.io/projected/8eedae47-54cd-438f-93d5-73b21a1fb540-kube-api-access-dbczg\") pod \"8eedae47-54cd-438f-93d5-73b21a1fb540\" (UID: \"8eedae47-54cd-438f-93d5-73b21a1fb540\") " Jan 26 00:13:17 crc kubenswrapper[5107]: I0126 00:13:17.988517 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-utilities" (OuterVolumeSpecName: "utilities") pod "8eedae47-54cd-438f-93d5-73b21a1fb540" (UID: "8eedae47-54cd-438f-93d5-73b21a1fb540"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:17 crc kubenswrapper[5107]: I0126 00:13:17.996709 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eedae47-54cd-438f-93d5-73b21a1fb540-kube-api-access-dbczg" (OuterVolumeSpecName: "kube-api-access-dbczg") pod "8eedae47-54cd-438f-93d5-73b21a1fb540" (UID: "8eedae47-54cd-438f-93d5-73b21a1fb540"). InnerVolumeSpecName "kube-api-access-dbczg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:18 crc kubenswrapper[5107]: I0126 00:13:18.000187 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8eedae47-54cd-438f-93d5-73b21a1fb540" (UID: "8eedae47-54cd-438f-93d5-73b21a1fb540"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:18 crc kubenswrapper[5107]: I0126 00:13:18.088030 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:18 crc kubenswrapper[5107]: I0126 00:13:18.088075 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dbczg\" (UniqueName: \"kubernetes.io/projected/8eedae47-54cd-438f-93d5-73b21a1fb540-kube-api-access-dbczg\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:18 crc kubenswrapper[5107]: I0126 00:13:18.088091 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eedae47-54cd-438f-93d5-73b21a1fb540-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:18 crc kubenswrapper[5107]: I0126 00:13:18.805318 5107 generic.go:358] "Generic (PLEG): container finished" podID="4498876a-5953-499f-aa71-6899b8529dcf" containerID="7354baf120344ed1dac52d3746c943295b30f546dee040fd19a9ac607e60408d" exitCode=0 Jan 26 00:13:18 crc kubenswrapper[5107]: I0126 00:13:18.805428 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-64rgr" event={"ID":"4498876a-5953-499f-aa71-6899b8529dcf","Type":"ContainerDied","Data":"7354baf120344ed1dac52d3746c943295b30f546dee040fd19a9ac607e60408d"} Jan 26 00:13:18 crc kubenswrapper[5107]: I0126 00:13:18.805927 5107 scope.go:117] "RemoveContainer" containerID="5041f82636ad9985627f247050509b672b7374047431fb966605ec3ca0acfb7d" Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.732299 5107 patch_prober.go:28] interesting pod/controller-manager-65677c569c-qmptv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.732407 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" podUID="ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.732652 5107 patch_prober.go:28] interesting pod/route-controller-manager-86685bc4b9-sdjw7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.732758 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" podUID="36bdac93-b162-4ca6-bbb9-cde31af23bc6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.813067 5107 generic.go:358] "Generic (PLEG): container finished" podID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerID="98d2032ad9fc86b0950204045ab6ddfadbe5bc9159f179a91a78da8692b7ceeb" exitCode=0 Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.813182 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-bfr4w" event={"ID":"1d8cc2bf-c61e-4f0a-9bee-068919e02489","Type":"ContainerDied","Data":"98d2032ad9fc86b0950204045ab6ddfadbe5bc9159f179a91a78da8692b7ceeb"} Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.815472 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk79k" event={"ID":"8eedae47-54cd-438f-93d5-73b21a1fb540","Type":"ContainerDied","Data":"d72e9cb11069ac532eb5a40f0cd485fd60063c4b07e8630ec5717a9b4d48f3c0"} Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.815591 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk79k" Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.817736 5107 generic.go:358] "Generic (PLEG): container finished" podID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerID="613771156e276340943201193d15d2878272cd5d48f7fbdf610dc106b79fd6ad" exitCode=0 Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.817808 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zd5l8" event={"ID":"638ad5ba-8cd0-49f3-817d-eb8c75ecc863","Type":"ContainerDied","Data":"613771156e276340943201193d15d2878272cd5d48f7fbdf610dc106b79fd6ad"} Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.837713 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk79k"] Jan 26 00:13:19 crc kubenswrapper[5107]: I0126 00:13:19.842421 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk79k"] Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.054390 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.118405 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-utilities\") pod \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.118506 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47jzt\" (UniqueName: \"kubernetes.io/projected/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-kube-api-access-47jzt\") pod \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.118577 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-catalog-content\") pod \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\" (UID: \"638ad5ba-8cd0-49f3-817d-eb8c75ecc863\") " Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.119594 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-utilities" (OuterVolumeSpecName: "utilities") pod "638ad5ba-8cd0-49f3-817d-eb8c75ecc863" (UID: "638ad5ba-8cd0-49f3-817d-eb8c75ecc863"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.121978 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eedae47-54cd-438f-93d5-73b21a1fb540" path="/var/lib/kubelet/pods/8eedae47-54cd-438f-93d5-73b21a1fb540/volumes" Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.127545 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-kube-api-access-47jzt" (OuterVolumeSpecName: "kube-api-access-47jzt") pod "638ad5ba-8cd0-49f3-817d-eb8c75ecc863" (UID: "638ad5ba-8cd0-49f3-817d-eb8c75ecc863"). InnerVolumeSpecName "kube-api-access-47jzt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.220947 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.221325 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-47jzt\" (UniqueName: \"kubernetes.io/projected/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-kube-api-access-47jzt\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.517703 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrc58"] Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.518603 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lrc58" podUID="b89f5a05-23c2-41e1-98b3-22ba5035191f" containerName="registry-server" containerID="cri-o://8ca89379a16ead32839334d508c1e47635d9c0035647b3e218ae7d8eddfcacfe" gracePeriod=2 Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.828312 5107 generic.go:358] "Generic (PLEG): container finished" podID="ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" containerID="ce2db7f715e78303467bbaa7d682554703bfd9f3ee99cb9bd4571e6402d3a2c8" exitCode=0 Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.828410 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" event={"ID":"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8","Type":"ContainerDied","Data":"ce2db7f715e78303467bbaa7d682554703bfd9f3ee99cb9bd4571e6402d3a2c8"} Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.831233 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zd5l8" event={"ID":"638ad5ba-8cd0-49f3-817d-eb8c75ecc863","Type":"ContainerDied","Data":"73ca6e245f7a9d66718f51480359fd46212820a9f4f0f2cb0dd9a22f6303951c"} Jan 26 00:13:20 crc kubenswrapper[5107]: I0126 00:13:20.831334 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zd5l8" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.260338 5107 scope.go:117] "RemoveContainer" containerID="34b27be6dfe10bb375bd115c261c824c99a969baadc2212ee2fc2f5d6a8d2cdd" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.301754 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.307298 5107 scope.go:117] "RemoveContainer" containerID="607dd323faded728597c15559c17dfe1ff5ad380c36ed01979600857cc2d4938" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.322905 5107 scope.go:117] "RemoveContainer" containerID="e45ce3e8de11079c80369688511d164b5b1f995f0e203ff2a4ac5615bb259b19" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.345446 5107 scope.go:117] "RemoveContainer" containerID="613771156e276340943201193d15d2878272cd5d48f7fbdf610dc106b79fd6ad" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.366513 5107 scope.go:117] "RemoveContainer" containerID="6cd8605d50084b36a5d6ceaac6fedc96774ec8fdae1ca0972e18406e170ba31b" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.389261 5107 scope.go:117] "RemoveContainer" containerID="c24fd2a96fbda832e078de2f17825928ccd2a840c6e8654990aeb7ce9549f1c6" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.444304 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8ltq\" (UniqueName: \"kubernetes.io/projected/1d8cc2bf-c61e-4f0a-9bee-068919e02489-kube-api-access-f8ltq\") pod \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.444395 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-utilities\") pod \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.444633 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-catalog-content\") pod \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\" (UID: \"1d8cc2bf-c61e-4f0a-9bee-068919e02489\") " Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.445553 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-utilities" (OuterVolumeSpecName: "utilities") pod "1d8cc2bf-c61e-4f0a-9bee-068919e02489" (UID: "1d8cc2bf-c61e-4f0a-9bee-068919e02489"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.451435 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8cc2bf-c61e-4f0a-9bee-068919e02489-kube-api-access-f8ltq" (OuterVolumeSpecName: "kube-api-access-f8ltq") pod "1d8cc2bf-c61e-4f0a-9bee-068919e02489" (UID: "1d8cc2bf-c61e-4f0a-9bee-068919e02489"). InnerVolumeSpecName "kube-api-access-f8ltq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.477690 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d8cc2bf-c61e-4f0a-9bee-068919e02489" (UID: "1d8cc2bf-c61e-4f0a-9bee-068919e02489"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.546356 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f8ltq\" (UniqueName: \"kubernetes.io/projected/1d8cc2bf-c61e-4f0a-9bee-068919e02489-kube-api-access-f8ltq\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.546401 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.546416 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8cc2bf-c61e-4f0a-9bee-068919e02489-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.840559 5107 generic.go:358] "Generic (PLEG): container finished" podID="36bdac93-b162-4ca6-bbb9-cde31af23bc6" containerID="460cc97a8269de781160ea8cfe9e84a86566ba203d63008198c8e0081698739d" exitCode=0 Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.840673 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" event={"ID":"36bdac93-b162-4ca6-bbb9-cde31af23bc6","Type":"ContainerDied","Data":"460cc97a8269de781160ea8cfe9e84a86566ba203d63008198c8e0081698739d"} Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.845344 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfr4w" event={"ID":"1d8cc2bf-c61e-4f0a-9bee-068919e02489","Type":"ContainerDied","Data":"5e960997a7e73fcdc0e599e98291f1239ecd90d5f9e187ebf70745e839fe22cb"} Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.845414 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bfr4w" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.845417 5107 scope.go:117] "RemoveContainer" containerID="98d2032ad9fc86b0950204045ab6ddfadbe5bc9159f179a91a78da8692b7ceeb" Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.882532 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bfr4w"] Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.885706 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bfr4w"] Jan 26 00:13:21 crc kubenswrapper[5107]: I0126 00:13:21.989856 5107 scope.go:117] "RemoveContainer" containerID="789381cf7ab635512a71d79fcd604d08479c2f8b35a19ac1f3b72d38ecd77b6c" Jan 26 00:13:22 crc kubenswrapper[5107]: I0126 00:13:22.007195 5107 scope.go:117] "RemoveContainer" containerID="2b5cef2d39ddcb877f318305057949c6adad31a6831afd0c1ef7e32cb5908114" Jan 26 00:13:22 crc kubenswrapper[5107]: I0126 00:13:22.931800 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" path="/var/lib/kubelet/pods/1d8cc2bf-c61e-4f0a-9bee-068919e02489/volumes" Jan 26 00:13:22 crc kubenswrapper[5107]: I0126 00:13:22.978296 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-wsw2x"] Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.084324 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "638ad5ba-8cd0-49f3-817d-eb8c75ecc863" (UID: "638ad5ba-8cd0-49f3-817d-eb8c75ecc863"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.176914 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/638ad5ba-8cd0-49f3-817d-eb8c75ecc863-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.265301 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zd5l8"] Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.270716 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zd5l8"] Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.395027 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.435581 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6"] Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436315 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerName="extract-content" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436337 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerName="extract-content" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436345 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerName="extract-content" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436352 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerName="extract-content" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436364 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerName="extract-utilities" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436371 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerName="extract-utilities" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436380 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerName="extract-content" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436387 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerName="extract-content" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436394 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerName="registry-server" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436399 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerName="registry-server" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436412 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36bdac93-b162-4ca6-bbb9-cde31af23bc6" containerName="route-controller-manager" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436417 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bdac93-b162-4ca6-bbb9-cde31af23bc6" containerName="route-controller-manager" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436424 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5838a86a-169c-4e09-85d9-25b6e7ee17bb" containerName="pruner" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436429 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="5838a86a-169c-4e09-85d9-25b6e7ee17bb" containerName="pruner" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436439 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerName="registry-server" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436444 5107 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerName="registry-server" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436458 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerName="extract-utilities" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436463 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerName="extract-utilities" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436469 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerName="registry-server" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436474 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerName="registry-server" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436492 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerName="extract-utilities" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436498 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerName="extract-utilities" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436622 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="5838a86a-169c-4e09-85d9-25b6e7ee17bb" containerName="pruner" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436640 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="1d8cc2bf-c61e-4f0a-9bee-068919e02489" containerName="registry-server" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436655 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="8eedae47-54cd-438f-93d5-73b21a1fb540" containerName="registry-server" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436662 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" containerName="registry-server" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.436671 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="36bdac93-b162-4ca6-bbb9-cde31af23bc6" containerName="route-controller-manager" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.482308 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36bdac93-b162-4ca6-bbb9-cde31af23bc6-serving-cert\") pod \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.482377 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-client-ca\") pod \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.482425 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-config\") pod \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.482566 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nktb\" (UniqueName: 
\"kubernetes.io/projected/36bdac93-b162-4ca6-bbb9-cde31af23bc6-kube-api-access-5nktb\") pod \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.482662 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36bdac93-b162-4ca6-bbb9-cde31af23bc6-tmp\") pod \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\" (UID: \"36bdac93-b162-4ca6-bbb9-cde31af23bc6\") " Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.483023 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36bdac93-b162-4ca6-bbb9-cde31af23bc6-tmp" (OuterVolumeSpecName: "tmp") pod "36bdac93-b162-4ca6-bbb9-cde31af23bc6" (UID: "36bdac93-b162-4ca6-bbb9-cde31af23bc6"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.483576 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-config" (OuterVolumeSpecName: "config") pod "36bdac93-b162-4ca6-bbb9-cde31af23bc6" (UID: "36bdac93-b162-4ca6-bbb9-cde31af23bc6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.483769 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-client-ca" (OuterVolumeSpecName: "client-ca") pod "36bdac93-b162-4ca6-bbb9-cde31af23bc6" (UID: "36bdac93-b162-4ca6-bbb9-cde31af23bc6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.496178 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bdac93-b162-4ca6-bbb9-cde31af23bc6-kube-api-access-5nktb" (OuterVolumeSpecName: "kube-api-access-5nktb") pod "36bdac93-b162-4ca6-bbb9-cde31af23bc6" (UID: "36bdac93-b162-4ca6-bbb9-cde31af23bc6"). InnerVolumeSpecName "kube-api-access-5nktb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.511694 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36bdac93-b162-4ca6-bbb9-cde31af23bc6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "36bdac93-b162-4ca6-bbb9-cde31af23bc6" (UID: "36bdac93-b162-4ca6-bbb9-cde31af23bc6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.584032 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5nktb\" (UniqueName: \"kubernetes.io/projected/36bdac93-b162-4ca6-bbb9-cde31af23bc6-kube-api-access-5nktb\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.584067 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36bdac93-b162-4ca6-bbb9-cde31af23bc6-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.584077 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36bdac93-b162-4ca6-bbb9-cde31af23bc6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.584087 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.584097 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bdac93-b162-4ca6-bbb9-cde31af23bc6-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.612431 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.685704 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-serving-cert\") pod \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.685805 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-client-ca\") pod \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.685855 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-tmp\") pod \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.686036 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-config\") pod \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.686089 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-proxy-ca-bundles\") pod \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.686120 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b47m\" (UniqueName: 
\"kubernetes.io/projected/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-kube-api-access-2b47m\") pod \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\" (UID: \"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8\") " Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.687170 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" (UID: "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.687416 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-tmp" (OuterVolumeSpecName: "tmp") pod "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" (UID: "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.688068 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" (UID: "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.688145 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-config" (OuterVolumeSpecName: "config") pod "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" (UID: "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.691084 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-kube-api-access-2b47m" (OuterVolumeSpecName: "kube-api-access-2b47m") pod "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" (UID: "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8"). InnerVolumeSpecName "kube-api-access-2b47m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.691228 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" (UID: "ce16ace8-c72f-4c8a-a0dc-1b101d66aad8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.748229 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.748356 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.788150 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.788202 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.788215 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2b47m\" (UniqueName: \"kubernetes.io/projected/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-kube-api-access-2b47m\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.788230 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.788242 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.788254 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.880273 5107 generic.go:358] "Generic (PLEG): container finished" podID="b89f5a05-23c2-41e1-98b3-22ba5035191f" containerID="8ca89379a16ead32839334d508c1e47635d9c0035647b3e218ae7d8eddfcacfe" exitCode=0 Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.952647 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" event={"ID":"36bdac93-b162-4ca6-bbb9-cde31af23bc6","Type":"ContainerDied","Data":"d9a9c9385c69e132e3ed9095a5b25416ff219e8b65dd6397fec93a5422ad12fa"} Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.952749 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-64rgr" event={"ID":"4498876a-5953-499f-aa71-6899b8529dcf","Type":"ContainerStarted","Data":"be2cd8a514e0d353390b2c7280e8dc1ff447bd69f86e37becc8d4bfa3ded69f9"} Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.952776 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" 
event={"ID":"ce16ace8-c72f-4c8a-a0dc-1b101d66aad8","Type":"ContainerDied","Data":"c81ce04f19c6b075c36b0d1a80c8265905fb82a791f78a4d52cce8c48b50f01b"} Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.952793 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrc58" event={"ID":"b89f5a05-23c2-41e1-98b3-22ba5035191f","Type":"ContainerDied","Data":"8ca89379a16ead32839334d508c1e47635d9c0035647b3e218ae7d8eddfcacfe"} Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.952832 5107 scope.go:117] "RemoveContainer" containerID="460cc97a8269de781160ea8cfe9e84a86566ba203d63008198c8e0081698739d" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.952858 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65677c569c-qmptv" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.952942 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.952973 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6"] Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.953078 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b5489864b-xgrwf"] Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.952781 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.954941 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" containerName="controller-manager" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.955555 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" containerName="controller-manager" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.955810 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" containerName="controller-manager" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.960538 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.960818 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.962922 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.963480 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.963794 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 26 00:13:23 crc kubenswrapper[5107]: I0126 00:13:23.964039 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:23.999868 5107 scope.go:117] "RemoveContainer" containerID="ce2db7f715e78303467bbaa7d682554703bfd9f3ee99cb9bd4571e6402d3a2c8" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.034009 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b5489864b-xgrwf"] Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.034063 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65677c569c-qmptv"] Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.034081 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65677c569c-qmptv"] Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.034148 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7"] Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.034167 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86685bc4b9-sdjw7"] Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.034394 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.036724 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.042849 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.043817 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.043994 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.044015 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.044399 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.051314 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.094465 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-client-ca\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.094575 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42760d54-52a5-4718-966c-b35aae39b112-tmp\") pod 
\"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.094714 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngpxk\" (UniqueName: \"kubernetes.io/projected/42760d54-52a5-4718-966c-b35aae39b112-kube-api-access-ngpxk\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.095602 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-config\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.095727 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42760d54-52a5-4718-966c-b35aae39b112-serving-cert\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.122094 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36bdac93-b162-4ca6-bbb9-cde31af23bc6" path="/var/lib/kubelet/pods/36bdac93-b162-4ca6-bbb9-cde31af23bc6/volumes" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.122710 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638ad5ba-8cd0-49f3-817d-eb8c75ecc863" path="/var/lib/kubelet/pods/638ad5ba-8cd0-49f3-817d-eb8c75ecc863/volumes" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.123785 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce16ace8-c72f-4c8a-a0dc-1b101d66aad8" path="/var/lib/kubelet/pods/ce16ace8-c72f-4c8a-a0dc-1b101d66aad8/volumes" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.197385 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-client-ca\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.197458 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-config\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.197477 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-proxy-ca-bundles\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " 
pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.197518 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-client-ca\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.197547 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42760d54-52a5-4718-966c-b35aae39b112-tmp\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.197592 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ngpxk\" (UniqueName: \"kubernetes.io/projected/42760d54-52a5-4718-966c-b35aae39b112-kube-api-access-ngpxk\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.197613 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hxt6\" (UniqueName: \"kubernetes.io/projected/b083abe4-5d92-474f-bc10-63c8174bb862-kube-api-access-6hxt6\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.197653 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-config\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.197673 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b083abe4-5d92-474f-bc10-63c8174bb862-tmp\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.197693 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42760d54-52a5-4718-966c-b35aae39b112-serving-cert\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.197737 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b083abe4-5d92-474f-bc10-63c8174bb862-serving-cert\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc 
kubenswrapper[5107]: I0126 00:13:24.200150 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-client-ca\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.200469 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42760d54-52a5-4718-966c-b35aae39b112-tmp\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.200498 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-config\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.206631 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42760d54-52a5-4718-966c-b35aae39b112-serving-cert\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.223154 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngpxk\" (UniqueName: \"kubernetes.io/projected/42760d54-52a5-4718-966c-b35aae39b112-kube-api-access-ngpxk\") pod \"route-controller-manager-76959bf66b-7tfq6\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.232361 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.285469 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.299070 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-config\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.299138 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-proxy-ca-bundles\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.299224 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6hxt6\" (UniqueName: \"kubernetes.io/projected/b083abe4-5d92-474f-bc10-63c8174bb862-kube-api-access-6hxt6\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.299274 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b083abe4-5d92-474f-bc10-63c8174bb862-tmp\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.299317 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b083abe4-5d92-474f-bc10-63c8174bb862-serving-cert\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.299448 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-client-ca\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.300699 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-client-ca\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.302144 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-config\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.303169 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/b083abe4-5d92-474f-bc10-63c8174bb862-tmp\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.304469 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-proxy-ca-bundles\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.308546 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b083abe4-5d92-474f-bc10-63c8174bb862-serving-cert\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.324571 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hxt6\" (UniqueName: \"kubernetes.io/projected/b083abe4-5d92-474f-bc10-63c8174bb862-kube-api-access-6hxt6\") pod \"controller-manager-b5489864b-xgrwf\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.359847 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.400158 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-utilities\") pod \"b89f5a05-23c2-41e1-98b3-22ba5035191f\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.400707 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqnzf\" (UniqueName: \"kubernetes.io/projected/b89f5a05-23c2-41e1-98b3-22ba5035191f-kube-api-access-gqnzf\") pod \"b89f5a05-23c2-41e1-98b3-22ba5035191f\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.400765 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-catalog-content\") pod \"b89f5a05-23c2-41e1-98b3-22ba5035191f\" (UID: \"b89f5a05-23c2-41e1-98b3-22ba5035191f\") " Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.402529 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-utilities" (OuterVolumeSpecName: "utilities") pod "b89f5a05-23c2-41e1-98b3-22ba5035191f" (UID: "b89f5a05-23c2-41e1-98b3-22ba5035191f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.402789 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.411265 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b89f5a05-23c2-41e1-98b3-22ba5035191f-kube-api-access-gqnzf" (OuterVolumeSpecName: "kube-api-access-gqnzf") pod "b89f5a05-23c2-41e1-98b3-22ba5035191f" (UID: "b89f5a05-23c2-41e1-98b3-22ba5035191f"). InnerVolumeSpecName "kube-api-access-gqnzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.505552 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gqnzf\" (UniqueName: \"kubernetes.io/projected/b89f5a05-23c2-41e1-98b3-22ba5035191f-kube-api-access-gqnzf\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.522365 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6"] Jan 26 00:13:24 crc kubenswrapper[5107]: W0126 00:13:24.536191 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42760d54_52a5_4718_966c_b35aae39b112.slice/crio-0b621b3f8177f2cd00e8a9ea14e9cf1ff53ac76048ae7ea42dbd94ed892a3b0b WatchSource:0}: Error finding container 0b621b3f8177f2cd00e8a9ea14e9cf1ff53ac76048ae7ea42dbd94ed892a3b0b: Status 404 returned error can't find the container with id 0b621b3f8177f2cd00e8a9ea14e9cf1ff53ac76048ae7ea42dbd94ed892a3b0b Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.566685 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b89f5a05-23c2-41e1-98b3-22ba5035191f" (UID: "b89f5a05-23c2-41e1-98b3-22ba5035191f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.607049 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b89f5a05-23c2-41e1-98b3-22ba5035191f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.827023 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b5489864b-xgrwf"] Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.907292 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrc58" event={"ID":"b89f5a05-23c2-41e1-98b3-22ba5035191f","Type":"ContainerDied","Data":"af992c4a8cb69cc4e1c03164c427346fef1cb300d461623ffc59763fe34d615f"} Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.907386 5107 scope.go:117] "RemoveContainer" containerID="8ca89379a16ead32839334d508c1e47635d9c0035647b3e218ae7d8eddfcacfe" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.907626 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrc58" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.916638 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" event={"ID":"b083abe4-5d92-474f-bc10-63c8174bb862","Type":"ContainerStarted","Data":"aa9fd07f4ffdf591fd10b13377ab8930e14a436e039e4beb959af11472b37bb5"} Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.919168 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" event={"ID":"42760d54-52a5-4718-966c-b35aae39b112","Type":"ContainerStarted","Data":"0b621b3f8177f2cd00e8a9ea14e9cf1ff53ac76048ae7ea42dbd94ed892a3b0b"} Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.923120 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-64rgr" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.923390 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.923442 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.947065 5107 scope.go:117] "RemoveContainer" containerID="31b587e42148a5a04645b44e0e7d20cf4d7c82e7b74a9e124a749dd19f354a23" Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.969190 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrc58"] Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.971716 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lrc58"] Jan 26 00:13:24 crc kubenswrapper[5107]: I0126 00:13:24.983035 5107 scope.go:117] "RemoveContainer" containerID="41f37484c14c6adb45db4c8392fa438acd69206f1bb007e3285c7bb2b3aebb4a" Jan 26 00:13:25 crc kubenswrapper[5107]: I0126 00:13:25.930112 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" event={"ID":"b083abe4-5d92-474f-bc10-63c8174bb862","Type":"ContainerStarted","Data":"d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981"} Jan 26 00:13:25 crc kubenswrapper[5107]: I0126 00:13:25.930724 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:25 crc kubenswrapper[5107]: I0126 00:13:25.935278 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" event={"ID":"42760d54-52a5-4718-966c-b35aae39b112","Type":"ContainerStarted","Data":"1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3"} Jan 26 00:13:25 crc kubenswrapper[5107]: I0126 00:13:25.935717 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:25 crc kubenswrapper[5107]: I0126 00:13:25.945084 5107 
patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:13:25 crc kubenswrapper[5107]: I0126 00:13:25.945199 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:13:25 crc kubenswrapper[5107]: I0126 00:13:25.960797 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" podStartSLOduration=8.960767361 podStartE2EDuration="8.960767361s" podCreationTimestamp="2026-01-26 00:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:13:25.960575905 +0000 UTC m=+250.878170251" watchObservedRunningTime="2026-01-26 00:13:25.960767361 +0000 UTC m=+250.878361707" Jan 26 00:13:25 crc kubenswrapper[5107]: I0126 00:13:25.991500 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" podStartSLOduration=8.99147446 podStartE2EDuration="8.99147446s" podCreationTimestamp="2026-01-26 00:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:13:25.989847543 +0000 UTC m=+250.907441879" watchObservedRunningTime="2026-01-26 00:13:25.99147446 +0000 UTC m=+250.909068806" Jan 26 00:13:26 crc kubenswrapper[5107]: I0126 00:13:26.108440 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:26 crc kubenswrapper[5107]: I0126 00:13:26.121921 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b89f5a05-23c2-41e1-98b3-22ba5035191f" path="/var/lib/kubelet/pods/b89f5a05-23c2-41e1-98b3-22ba5035191f/volumes" Jan 26 00:13:26 crc kubenswrapper[5107]: I0126 00:13:26.931767 5107 patch_prober.go:28] interesting pod/controller-manager-b5489864b-xgrwf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": context deadline exceeded" start-of-body= Jan 26 00:13:26 crc kubenswrapper[5107]: I0126 00:13:26.931872 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": context deadline exceeded" Jan 26 00:13:26 crc kubenswrapper[5107]: I0126 00:13:26.948709 5107 generic.go:358] "Generic (PLEG): container finished" podID="42d6fb86-e6fd-4b77-b921-d62cd5b6e825" containerID="25c753829a3ea671654968aa55a9066bc67cd29dcef2e5f1416e65017e329236" exitCode=0 Jan 26 00:13:26 crc kubenswrapper[5107]: I0126 00:13:26.948824 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-jn9bq" 
event={"ID":"42d6fb86-e6fd-4b77-b921-d62cd5b6e825","Type":"ContainerDied","Data":"25c753829a3ea671654968aa55a9066bc67cd29dcef2e5f1416e65017e329236"} Jan 26 00:13:27 crc kubenswrapper[5107]: I0126 00:13:27.949453 5107 patch_prober.go:28] interesting pod/controller-manager-b5489864b-xgrwf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:13:27 crc kubenswrapper[5107]: I0126 00:13:27.949583 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:13:28 crc kubenswrapper[5107]: I0126 00:13:28.206822 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-jn9bq" Jan 26 00:13:28 crc kubenswrapper[5107]: I0126 00:13:28.372223 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26dbv\" (UniqueName: \"kubernetes.io/projected/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-kube-api-access-26dbv\") pod \"42d6fb86-e6fd-4b77-b921-d62cd5b6e825\" (UID: \"42d6fb86-e6fd-4b77-b921-d62cd5b6e825\") " Jan 26 00:13:28 crc kubenswrapper[5107]: I0126 00:13:28.373256 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-serviceca\") pod \"42d6fb86-e6fd-4b77-b921-d62cd5b6e825\" (UID: \"42d6fb86-e6fd-4b77-b921-d62cd5b6e825\") " Jan 26 00:13:28 crc kubenswrapper[5107]: I0126 00:13:28.374341 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-serviceca" (OuterVolumeSpecName: "serviceca") pod "42d6fb86-e6fd-4b77-b921-d62cd5b6e825" (UID: "42d6fb86-e6fd-4b77-b921-d62cd5b6e825"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:28 crc kubenswrapper[5107]: I0126 00:13:28.395056 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-kube-api-access-26dbv" (OuterVolumeSpecName: "kube-api-access-26dbv") pod "42d6fb86-e6fd-4b77-b921-d62cd5b6e825" (UID: "42d6fb86-e6fd-4b77-b921-d62cd5b6e825"). InnerVolumeSpecName "kube-api-access-26dbv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:28 crc kubenswrapper[5107]: I0126 00:13:28.474458 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26dbv\" (UniqueName: \"kubernetes.io/projected/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-kube-api-access-26dbv\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:28 crc kubenswrapper[5107]: I0126 00:13:28.474510 5107 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/42d6fb86-e6fd-4b77-b921-d62cd5b6e825-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:28 crc kubenswrapper[5107]: I0126 00:13:28.966674 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-jn9bq" event={"ID":"42d6fb86-e6fd-4b77-b921-d62cd5b6e825","Type":"ContainerDied","Data":"e07cf9b5690fbada99aa3df74d4ab52a8996875d54186912bb18136ccdf8a62e"} Jan 26 00:13:28 crc kubenswrapper[5107]: I0126 00:13:28.966722 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-jn9bq" Jan 26 00:13:28 crc kubenswrapper[5107]: I0126 00:13:28.966729 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e07cf9b5690fbada99aa3df74d4ab52a8996875d54186912bb18136ccdf8a62e" Jan 26 00:13:30 crc kubenswrapper[5107]: I0126 00:13:30.313503 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:13:30 crc kubenswrapper[5107]: I0126 00:13:30.723972 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:13:30 crc kubenswrapper[5107]: I0126 00:13:30.724853 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:13:33 crc kubenswrapper[5107]: I0126 00:13:33.747386 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:13:33 crc kubenswrapper[5107]: I0126 00:13:33.747511 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:13:35 crc kubenswrapper[5107]: I0126 00:13:35.943471 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-64rgr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 00:13:35 crc kubenswrapper[5107]: I0126 00:13:35.943572 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-64rgr" podUID="4498876a-5953-499f-aa71-6899b8529dcf" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 00:13:36 crc kubenswrapper[5107]: I0126 00:13:36.649212 5107 patch_prober.go:28] interesting pod/package-server-manager-77f986bd66-pwh7s container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:13:36 crc kubenswrapper[5107]: I0126 00:13:36.649297 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pwh7s" podUID="e39cba7d-bc11-44ab-a079-c2b873d17ef9" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.19:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 00:13:36 crc kubenswrapper[5107]: I0126 00:13:36.957432 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:37 crc kubenswrapper[5107]: I0126 00:13:37.130413 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b5489864b-xgrwf"] Jan 26 00:13:37 crc kubenswrapper[5107]: I0126 00:13:37.131161 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" containerName="controller-manager" containerID="cri-o://d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981" gracePeriod=30 Jan 26 00:13:37 crc kubenswrapper[5107]: I0126 00:13:37.153043 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6"] Jan 26 00:13:37 crc kubenswrapper[5107]: I0126 00:13:37.153347 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" podUID="42760d54-52a5-4718-966c-b35aae39b112" containerName="route-controller-manager" containerID="cri-o://1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3" gracePeriod=30 Jan 26 00:13:38 crc kubenswrapper[5107]: E0126 00:13:38.486378 5107 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Jan 26 00:13:38 crc kubenswrapper[5107]: I0126 00:13:38.487972 5107 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:13:38 crc kubenswrapper[5107]: I0126 00:13:38.488691 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42d6fb86-e6fd-4b77-b921-d62cd5b6e825" containerName="image-pruner" Jan 26 00:13:38 crc kubenswrapper[5107]: I0126 00:13:38.488716 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d6fb86-e6fd-4b77-b921-d62cd5b6e825" containerName="image-pruner" Jan 26 00:13:38 crc kubenswrapper[5107]: I0126 00:13:38.488734 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b89f5a05-23c2-41e1-98b3-22ba5035191f" 
containerName="extract-utilities" Jan 26 00:13:38 crc kubenswrapper[5107]: I0126 00:13:38.488744 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b89f5a05-23c2-41e1-98b3-22ba5035191f" containerName="extract-utilities" Jan 26 00:13:38 crc kubenswrapper[5107]: I0126 00:13:38.488769 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b89f5a05-23c2-41e1-98b3-22ba5035191f" containerName="extract-content" Jan 26 00:13:38 crc kubenswrapper[5107]: I0126 00:13:38.488775 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b89f5a05-23c2-41e1-98b3-22ba5035191f" containerName="extract-content" Jan 26 00:13:38 crc kubenswrapper[5107]: I0126 00:13:38.488783 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b89f5a05-23c2-41e1-98b3-22ba5035191f" containerName="registry-server" Jan 26 00:13:38 crc kubenswrapper[5107]: I0126 00:13:38.488788 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b89f5a05-23c2-41e1-98b3-22ba5035191f" containerName="registry-server" Jan 26 00:13:38 crc kubenswrapper[5107]: I0126 00:13:38.488927 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="42d6fb86-e6fd-4b77-b921-d62cd5b6e825" containerName="image-pruner" Jan 26 00:13:38 crc kubenswrapper[5107]: I0126 00:13:38.488946 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b89f5a05-23c2-41e1-98b3-22ba5035191f" containerName="registry-server" Jan 26 00:13:42 crc kubenswrapper[5107]: I0126 00:13:42.906780 5107 generic.go:358] "Generic (PLEG): container finished" podID="b083abe4-5d92-474f-bc10-63c8174bb862" containerID="d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981" exitCode=0 Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.594217 5107 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.594559 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.595029 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b" gracePeriod=15 Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.596416 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d" gracePeriod=15 Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.596473 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7" gracePeriod=15 Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.596572 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c" gracePeriod=15 Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.596616 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1" gracePeriod=15 Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.618731 5107 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.628090 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650346 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650424 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650450 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650456 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650464 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650471 5107 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650482 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650489 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650516 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650524 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650535 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650541 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650551 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650559 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650595 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650601 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650836 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650853 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650866 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650897 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650908 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.650921 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.651209 5107 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.651217 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.651520 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.651539 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.698240 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.700103 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.700204 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.703009 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.705875 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.808357 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.808480 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.808511 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.808556 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.808578 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.808659 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.808700 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.808919 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.808950 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.808985 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.937227 5107 patch_prober.go:28] interesting pod/route-controller-manager-76959bf66b-7tfq6 container/route-controller-manager namespace/openshift-route-controller-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.937426 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" podUID="42760d54-52a5-4718-966c-b35aae39b112" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 26 00:13:45 crc kubenswrapper[5107]: E0126 00:13:45.938831 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.203:6443: connect: connection refused" event=< Jan 26 00:13:45 crc kubenswrapper[5107]: &Event{ObjectMeta:{route-controller-manager-76959bf66b-7tfq6.188e1f946c63f4cc openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-76959bf66b-7tfq6,UID:42760d54-52a5-4718-966c-b35aae39b112,APIVersion:v1,ResourceVersion:39313,FieldPath:spec.containers{route-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.63:8443/healthz": dial tcp 10.217.0.63:8443: connect: connection refused Jan 26 00:13:45 crc kubenswrapper[5107]: body: Jan 26 00:13:45 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:13:45.93729454 +0000 UTC m=+270.854888926,LastTimestamp:2026-01-26 00:13:45.93729454 +0000 UTC m=+270.854888926,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:13:45 crc kubenswrapper[5107]: > Jan 26 00:13:45 crc kubenswrapper[5107]: I0126 00:13:45.948346 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.335079 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.337046 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.378061 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.378963 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.380204 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.419393 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngpxk\" (UniqueName: \"kubernetes.io/projected/42760d54-52a5-4718-966c-b35aae39b112-kube-api-access-ngpxk\") pod \"42760d54-52a5-4718-966c-b35aae39b112\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.419470 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-config\") pod \"b083abe4-5d92-474f-bc10-63c8174bb862\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.419506 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b083abe4-5d92-474f-bc10-63c8174bb862-serving-cert\") pod \"b083abe4-5d92-474f-bc10-63c8174bb862\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.419599 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-proxy-ca-bundles\") pod \"b083abe4-5d92-474f-bc10-63c8174bb862\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.419621 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hxt6\" (UniqueName: \"kubernetes.io/projected/b083abe4-5d92-474f-bc10-63c8174bb862-kube-api-access-6hxt6\") pod \"b083abe4-5d92-474f-bc10-63c8174bb862\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.419642 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-config\") pod \"42760d54-52a5-4718-966c-b35aae39b112\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.419700 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42760d54-52a5-4718-966c-b35aae39b112-tmp\") pod \"42760d54-52a5-4718-966c-b35aae39b112\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.419728 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-client-ca\") pod \"b083abe4-5d92-474f-bc10-63c8174bb862\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.419769 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-client-ca\") pod \"42760d54-52a5-4718-966c-b35aae39b112\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.419800 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42760d54-52a5-4718-966c-b35aae39b112-serving-cert\") pod \"42760d54-52a5-4718-966c-b35aae39b112\" (UID: \"42760d54-52a5-4718-966c-b35aae39b112\") " Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.419834 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b083abe4-5d92-474f-bc10-63c8174bb862-tmp\") pod \"b083abe4-5d92-474f-bc10-63c8174bb862\" (UID: \"b083abe4-5d92-474f-bc10-63c8174bb862\") " Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.420370 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42760d54-52a5-4718-966c-b35aae39b112-tmp" (OuterVolumeSpecName: "tmp") pod "42760d54-52a5-4718-966c-b35aae39b112" (UID: "42760d54-52a5-4718-966c-b35aae39b112"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.420566 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b083abe4-5d92-474f-bc10-63c8174bb862-tmp" (OuterVolumeSpecName: "tmp") pod "b083abe4-5d92-474f-bc10-63c8174bb862" (UID: "b083abe4-5d92-474f-bc10-63c8174bb862"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.420767 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b083abe4-5d92-474f-bc10-63c8174bb862" (UID: "b083abe4-5d92-474f-bc10-63c8174bb862"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.421491 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-client-ca" (OuterVolumeSpecName: "client-ca") pod "b083abe4-5d92-474f-bc10-63c8174bb862" (UID: "b083abe4-5d92-474f-bc10-63c8174bb862"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.421507 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-client-ca" (OuterVolumeSpecName: "client-ca") pod "42760d54-52a5-4718-966c-b35aae39b112" (UID: "42760d54-52a5-4718-966c-b35aae39b112"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.421507 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-config" (OuterVolumeSpecName: "config") pod "b083abe4-5d92-474f-bc10-63c8174bb862" (UID: "b083abe4-5d92-474f-bc10-63c8174bb862"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.421553 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-config" (OuterVolumeSpecName: "config") pod "42760d54-52a5-4718-966c-b35aae39b112" (UID: "42760d54-52a5-4718-966c-b35aae39b112"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.425161 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b083abe4-5d92-474f-bc10-63c8174bb862-kube-api-access-6hxt6" (OuterVolumeSpecName: "kube-api-access-6hxt6") pod "b083abe4-5d92-474f-bc10-63c8174bb862" (UID: "b083abe4-5d92-474f-bc10-63c8174bb862"). InnerVolumeSpecName "kube-api-access-6hxt6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.425259 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42760d54-52a5-4718-966c-b35aae39b112-kube-api-access-ngpxk" (OuterVolumeSpecName: "kube-api-access-ngpxk") pod "42760d54-52a5-4718-966c-b35aae39b112" (UID: "42760d54-52a5-4718-966c-b35aae39b112"). InnerVolumeSpecName "kube-api-access-ngpxk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.425534 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b083abe4-5d92-474f-bc10-63c8174bb862-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b083abe4-5d92-474f-bc10-63c8174bb862" (UID: "b083abe4-5d92-474f-bc10-63c8174bb862"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.425629 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42760d54-52a5-4718-966c-b35aae39b112-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "42760d54-52a5-4718-966c-b35aae39b112" (UID: "42760d54-52a5-4718-966c-b35aae39b112"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.522466 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ngpxk\" (UniqueName: \"kubernetes.io/projected/42760d54-52a5-4718-966c-b35aae39b112-kube-api-access-ngpxk\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.522524 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.522537 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b083abe4-5d92-474f-bc10-63c8174bb862-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.522552 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.522565 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6hxt6\" (UniqueName: \"kubernetes.io/projected/b083abe4-5d92-474f-bc10-63c8174bb862-kube-api-access-6hxt6\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.522577 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.522588 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/42760d54-52a5-4718-966c-b35aae39b112-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.522600 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b083abe4-5d92-474f-bc10-63c8174bb862-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.522613 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42760d54-52a5-4718-966c-b35aae39b112-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.522628 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42760d54-52a5-4718-966c-b35aae39b112-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.522644 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b083abe4-5d92-474f-bc10-63c8174bb862-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:46 crc kubenswrapper[5107]: I0126 00:13:46.647850 5107 generic.go:358] "Generic (PLEG): container finished" podID="42760d54-52a5-4718-966c-b35aae39b112" containerID="1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3" exitCode=0 Jan 26 00:13:47 crc kubenswrapper[5107]: I0126 00:13:47.676600 5107 generic.go:358] "Generic (PLEG): container finished" podID="9ac12125-d091-4b8b-89ba-b5b821b7a825" containerID="75fa3b2f0066620ad3ea188036e696142bb91d9e2b86acff05551ed2d996b344" exitCode=0 Jan 26 00:13:47 crc kubenswrapper[5107]: I0126 00:13:47.683616 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:13:47 crc kubenswrapper[5107]: I0126 00:13:47.688204 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:13:47 crc kubenswrapper[5107]: I0126 00:13:47.689833 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d" exitCode=0 Jan 26 00:13:47 crc kubenswrapper[5107]: I0126 00:13:47.689868 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1" exitCode=2 Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.035438 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" containerName="oauth-openshift" containerID="cri-o://c552ee5e0a5ac231f695f5a2a0838b3e4acd7e8bab123274e6c43d2ef07f5fef" gracePeriod=15 Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.209313 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.211622 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.212024 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-64rgr" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.212441 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.215057 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.215994 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.216506 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.217415 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.219109 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.219435 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.220365 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.220666 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 
00:13:48.224920 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" event={"ID":"b083abe4-5d92-474f-bc10-63c8174bb862","Type":"ContainerDied","Data":"d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981"} Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.224961 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" event={"ID":"42760d54-52a5-4718-966c-b35aae39b112","Type":"ContainerDied","Data":"1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3"} Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.224975 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" event={"ID":"b083abe4-5d92-474f-bc10-63c8174bb862","Type":"ContainerDied","Data":"aa9fd07f4ffdf591fd10b13377ab8930e14a436e039e4beb959af11472b37bb5"} Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.224987 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" event={"ID":"42760d54-52a5-4718-966c-b35aae39b112","Type":"ContainerDied","Data":"0b621b3f8177f2cd00e8a9ea14e9cf1ff53ac76048ae7ea42dbd94ed892a3b0b"} Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.225000 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"32bbc1a4ea68ea7261e3684701be9d8b5233dd1e4acbbd62cc2750eff85f3bd1"} Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.225013 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"9ac12125-d091-4b8b-89ba-b5b821b7a825","Type":"ContainerDied","Data":"75fa3b2f0066620ad3ea188036e696142bb91d9e2b86acff05551ed2d996b344"} Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.225448 5107 scope.go:117] "RemoveContainer" containerID="d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.225559 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.225805 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.226217 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.226778 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" 
pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.227085 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.237244 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.238806 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.239630 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.240015 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.240372 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.240899 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.241568 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": 
dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.242292 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.243014 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.243357 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.252571 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.252652 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.252840 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.253141 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.253557 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.298145 5107 scope.go:117] "RemoveContainer" containerID="1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.355138 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.355226 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.355251 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.355286 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.355403 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.355549 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.355627 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.355574 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.356444 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.356496 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.413949 
5107 scope.go:117] "RemoveContainer" containerID="d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981" Jan 26 00:13:48 crc kubenswrapper[5107]: E0126 00:13:48.414731 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981\": container with ID starting with d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981 not found: ID does not exist" containerID="d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.414837 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981"} err="failed to get container status \"d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981\": rpc error: code = NotFound desc = could not find container \"d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981\": container with ID starting with d1dd9e26c4427a4b920e2e6811bc593aaf830cd47c08de0d02b4f83ce6855981 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.414922 5107 scope.go:117] "RemoveContainer" containerID="1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3" Jan 26 00:13:48 crc kubenswrapper[5107]: E0126 00:13:48.415833 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3\": container with ID starting with 1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3 not found: ID does not exist" containerID="1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.415904 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3"} err="failed to get container status \"1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3\": rpc error: code = NotFound desc = could not find container \"1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3\": container with ID starting with 1c7b0e4475d26d668a3a3868c99d1105483fee0ee34e084ff2b63aef785bc1f3 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.415928 5107 scope.go:117] "RemoveContainer" containerID="cce459ad004254e8afec72b815e731aa25828326ffe317a8dd4ac064ffc744fb" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.716805 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.719416 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7" exitCode=0 Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.719485 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c" exitCode=0 Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.980424 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.982567 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.983069 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.983438 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.983779 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5107]: I0126 00:13:48.984123 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.068925 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-kubelet-dir\") pod \"9ac12125-d091-4b8b-89ba-b5b821b7a825\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.069137 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-var-lock\") pod \"9ac12125-d091-4b8b-89ba-b5b821b7a825\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.069208 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac12125-d091-4b8b-89ba-b5b821b7a825-kube-api-access\") pod \"9ac12125-d091-4b8b-89ba-b5b821b7a825\" (UID: \"9ac12125-d091-4b8b-89ba-b5b821b7a825\") " Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.069241 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9ac12125-d091-4b8b-89ba-b5b821b7a825" (UID: "9ac12125-d091-4b8b-89ba-b5b821b7a825"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.069412 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-var-lock" (OuterVolumeSpecName: "var-lock") pod "9ac12125-d091-4b8b-89ba-b5b821b7a825" (UID: "9ac12125-d091-4b8b-89ba-b5b821b7a825"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.069980 5107 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.070007 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ac12125-d091-4b8b-89ba-b5b821b7a825-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.077049 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac12125-d091-4b8b-89ba-b5b821b7a825-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9ac12125-d091-4b8b-89ba-b5b821b7a825" (UID: "9ac12125-d091-4b8b-89ba-b5b821b7a825"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.171763 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac12125-d091-4b8b-89ba-b5b821b7a825-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:49 crc kubenswrapper[5107]: E0126 00:13:49.664820 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.203:6443: connect: connection refused" event=< Jan 26 00:13:49 crc kubenswrapper[5107]: &Event{ObjectMeta:{route-controller-manager-76959bf66b-7tfq6.188e1f946c63f4cc openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-76959bf66b-7tfq6,UID:42760d54-52a5-4718-966c-b35aae39b112,APIVersion:v1,ResourceVersion:39313,FieldPath:spec.containers{route-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.63:8443/healthz": dial tcp 10.217.0.63:8443: connect: connection refused Jan 26 00:13:49 crc kubenswrapper[5107]: body: Jan 26 00:13:49 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:13:45.93729454 +0000 UTC m=+270.854888926,LastTimestamp:2026-01-26 00:13:45.93729454 +0000 UTC m=+270.854888926,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:13:49 crc kubenswrapper[5107]: > Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.728932 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"9ac12125-d091-4b8b-89ba-b5b821b7a825","Type":"ContainerDied","Data":"d567f6da3bc8a8a126853a3fb4ff9d1d6e71fecc021e39d22c9c17dbc4153f41"} Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.729035 5107 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="d567f6da3bc8a8a126853a3fb4ff9d1d6e71fecc021e39d22c9c17dbc4153f41" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.728978 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.732942 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.733676 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b" exitCode=0 Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.744349 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.745082 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.745590 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.745846 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5107]: I0126 00:13:49.746176 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:50 crc kubenswrapper[5107]: I0126 00:13:50.742801 5107 generic.go:358] "Generic (PLEG): container finished" podID="c1eb51c7-ee2f-4230-929d-62d6608eca89" containerID="c552ee5e0a5ac231f695f5a2a0838b3e4acd7e8bab123274e6c43d2ef07f5fef" exitCode=0 Jan 26 00:13:50 crc kubenswrapper[5107]: I0126 00:13:50.742975 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" event={"ID":"c1eb51c7-ee2f-4230-929d-62d6608eca89","Type":"ContainerDied","Data":"c552ee5e0a5ac231f695f5a2a0838b3e4acd7e8bab123274e6c43d2ef07f5fef"} Jan 26 00:13:51 crc kubenswrapper[5107]: I0126 00:13:51.754422 5107 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:13:51 crc kubenswrapper[5107]: E0126 00:13:51.986449 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:51 crc kubenswrapper[5107]: E0126 00:13:51.987281 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:51 crc kubenswrapper[5107]: E0126 00:13:51.988195 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:51 crc kubenswrapper[5107]: E0126 00:13:51.988583 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:51 crc kubenswrapper[5107]: E0126 00:13:51.988975 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:51 crc kubenswrapper[5107]: I0126 00:13:51.989022 5107 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 00:13:51 crc kubenswrapper[5107]: E0126 00:13:51.989467 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="200ms" Jan 26 00:13:52 crc kubenswrapper[5107]: E0126 00:13:52.191037 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="400ms" Jan 26 00:13:52 crc kubenswrapper[5107]: E0126 00:13:52.592646 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="800ms" Jan 26 00:13:52 crc kubenswrapper[5107]: I0126 00:13:52.764331 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d"} Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.104932 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.106350 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.107425 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.107917 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.108575 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.108877 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.109238 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.109639 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.194983 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.195053 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.195138 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 26 
00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.195223 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.195225 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.195288 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.195263 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.195400 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.196096 5107 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.196116 5107 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.196322 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.198961 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.260794 5107 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-wsw2x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.260912 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.297873 5107 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.297918 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.297929 5107 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: E0126 00:13:53.394371 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="1.6s" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.732395 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.733575 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.735458 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.736101 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.736424 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.737288 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.737709 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.737966 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.776761 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.777796 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.777803 5107 scope.go:117] "RemoveContainer" containerID="8771a49a10f3f3e07f25647aa9c52ba74dae813bb12b4e2d0f80e6996482bd1d" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.781301 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" event={"ID":"c1eb51c7-ee2f-4230-929d-62d6608eca89","Type":"ContainerDied","Data":"c29715f66068f5dd64bc0ad1202d0278a8092895116c00a5fe223fbdff71310a"} Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.783553 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.784235 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.784974 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.785564 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.786007 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.786327 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.786666 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.786989 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.787427 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.787723 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.788049 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.788551 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.789021 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.789314 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.789604 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.794862 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 
00:13:53.795219 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.795565 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.795854 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.800154 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.800588 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.800843 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.800984 5107 scope.go:117] "RemoveContainer" containerID="77afb4ec3e1993d3627dfd57b2c724e127e0b709358c469f86fe32abae3a75a7" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.806142 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-policies\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.806174 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-idp-0-file-data\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.806206 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-service-ca\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.806239 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-router-certs\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.806302 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-ocp-branding-template\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.806383 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-login\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.806439 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-serving-cert\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.806460 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-error\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.806491 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-trusted-ca-bundle\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.807103 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zdjl\" (UniqueName: \"kubernetes.io/projected/c1eb51c7-ee2f-4230-929d-62d6608eca89-kube-api-access-5zdjl\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.807186 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-session\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.807259 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-provider-selection\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.807284 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.807308 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-dir\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.807344 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-cliconfig\") pod \"c1eb51c7-ee2f-4230-929d-62d6608eca89\" (UID: \"c1eb51c7-ee2f-4230-929d-62d6608eca89\") " Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.807695 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.808323 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.808353 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.808393 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.808702 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.815502 5107 scope.go:117] "RemoveContainer" containerID="6c1d676a79dd2425942bd62e4d423f98509d8fbdce526ec4174c8f201faab13c" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.819290 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.820197 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.820547 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.820877 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.821036 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.826016 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.826154 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.826869 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.827366 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1eb51c7-ee2f-4230-929d-62d6608eca89-kube-api-access-5zdjl" (OuterVolumeSpecName: "kube-api-access-5zdjl") pod "c1eb51c7-ee2f-4230-929d-62d6608eca89" (UID: "c1eb51c7-ee2f-4230-929d-62d6608eca89"). InnerVolumeSpecName "kube-api-access-5zdjl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.860210 5107 scope.go:117] "RemoveContainer" containerID="e6a9e0e1088ec6d6c55e9c40410af1e160ce01e045855d38afe83fae0f283ad1" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.877768 5107 scope.go:117] "RemoveContainer" containerID="47fa690b41b05a971d8e2d25a105b0c873282b4794f352165354120564685e3b" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.898011 5107 scope.go:117] "RemoveContainer" containerID="d94b94d763fd9b6ca2afc7d80857535d8affdf06549ca617b1c6bc8bd21ec18b" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909448 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909471 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909484 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909494 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5zdjl\" (UniqueName: \"kubernetes.io/projected/c1eb51c7-ee2f-4230-929d-62d6608eca89-kube-api-access-5zdjl\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909503 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc 
kubenswrapper[5107]: I0126 00:13:53.909515 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909532 5107 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909542 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909551 5107 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909563 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909572 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909581 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.909590 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1eb51c7-ee2f-4230-929d-62d6608eca89-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:53.965969 5107 scope.go:117] "RemoveContainer" containerID="c552ee5e0a5ac231f695f5a2a0838b3e4acd7e8bab123274e6c43d2ef07f5fef" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:54.101141 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:54.102568 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:54.103236 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" 
pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:54.103567 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:54.103756 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:54.103973 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:54.104181 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:55 crc kubenswrapper[5107]: I0126 00:13:54.121634 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 26 00:13:55 crc kubenswrapper[5107]: E0126 00:13:54.996078 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="3.2s" Jan 26 00:13:56 crc kubenswrapper[5107]: I0126 00:13:56.118100 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:56 crc kubenswrapper[5107]: I0126 00:13:56.118834 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:56 crc kubenswrapper[5107]: I0126 00:13:56.119197 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:56 crc kubenswrapper[5107]: I0126 00:13:56.119764 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:56 crc kubenswrapper[5107]: I0126 00:13:56.123592 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:56 crc kubenswrapper[5107]: I0126 00:13:56.123966 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:58 crc kubenswrapper[5107]: E0126 00:13:58.196782 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="6.4s" Jan 26 00:13:59 crc kubenswrapper[5107]: E0126 00:13:59.666416 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.203:6443: connect: connection refused" event=< Jan 26 00:13:59 crc kubenswrapper[5107]: &Event{ObjectMeta:{route-controller-manager-76959bf66b-7tfq6.188e1f946c63f4cc openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-76959bf66b-7tfq6,UID:42760d54-52a5-4718-966c-b35aae39b112,APIVersion:v1,ResourceVersion:39313,FieldPath:spec.containers{route-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.63:8443/healthz": dial tcp 10.217.0.63:8443: connect: connection refused Jan 26 00:13:59 crc kubenswrapper[5107]: body: Jan 26 00:13:59 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:13:45.93729454 +0000 UTC m=+270.854888926,LastTimestamp:2026-01-26 00:13:45.93729454 +0000 UTC m=+270.854888926,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:13:59 crc kubenswrapper[5107]: > Jan 26 00:13:59 crc kubenswrapper[5107]: I0126 00:13:59.832851 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:13:59 crc kubenswrapper[5107]: I0126 00:13:59.832925 5107 generic.go:358] "Generic (PLEG): container finished" 
podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b" exitCode=1 Jan 26 00:13:59 crc kubenswrapper[5107]: I0126 00:13:59.832986 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b"} Jan 26 00:13:59 crc kubenswrapper[5107]: I0126 00:13:59.834079 5107 scope.go:117] "RemoveContainer" containerID="322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b" Jan 26 00:13:59 crc kubenswrapper[5107]: I0126 00:13:59.834374 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:59 crc kubenswrapper[5107]: I0126 00:13:59.834988 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:59 crc kubenswrapper[5107]: I0126 00:13:59.836033 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:59 crc kubenswrapper[5107]: I0126 00:13:59.836224 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:59 crc kubenswrapper[5107]: I0126 00:13:59.836597 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:59 crc kubenswrapper[5107]: I0126 00:13:59.837271 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:13:59 crc kubenswrapper[5107]: I0126 00:13:59.837515 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection 
refused" Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.724511 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.725235 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.725362 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.726544 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599"} pod="openshift-machine-config-operator/machine-config-daemon-94c4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.726688 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" containerID="cri-o://c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599" gracePeriod=600 Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.846901 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.847133 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"09d0327bae68208669f52d0b061428d9f9b6ef82e15d696ca77d71d24faf9e36"} Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.848553 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.848742 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.849096 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.849626 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.849855 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.850191 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5107]: I0126 00:14:00.850402 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.112986 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.114032 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.114635 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.114933 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.115285 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.115629 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.115954 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.116409 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.128321 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="504c44df-fe93-44f1-bab1-0ea8b1eb3980" Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.128360 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="504c44df-fe93-44f1-bab1-0ea8b1eb3980" Jan 26 00:14:01 crc kubenswrapper[5107]: E0126 00:14:01.128960 5107 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.129329 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:01 crc kubenswrapper[5107]: W0126 00:14:01.154063 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-288f6c88e054059ee08d22a0e738ad48c45ec89cfd89d5024b52d95ec6507f36 WatchSource:0}: Error finding container 288f6c88e054059ee08d22a0e738ad48c45ec89cfd89d5024b52d95ec6507f36: Status 404 returned error can't find the container with id 288f6c88e054059ee08d22a0e738ad48c45ec89cfd89d5024b52d95ec6507f36 Jan 26 00:14:01 crc kubenswrapper[5107]: I0126 00:14:01.871793 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"288f6c88e054059ee08d22a0e738ad48c45ec89cfd89d5024b52d95ec6507f36"} Jan 26 00:14:02 crc kubenswrapper[5107]: I0126 00:14:02.365792 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:14:02 crc kubenswrapper[5107]: I0126 00:14:02.366219 5107 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 00:14:02 crc kubenswrapper[5107]: I0126 00:14:02.366320 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 00:14:04 crc kubenswrapper[5107]: E0126 00:14:04.598878 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="7s" Jan 26 00:14:06 crc kubenswrapper[5107]: I0126 00:14:06.121984 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:06 crc kubenswrapper[5107]: I0126 00:14:06.122686 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:06 crc kubenswrapper[5107]: I0126 00:14:06.122958 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" 
pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:06 crc kubenswrapper[5107]: I0126 00:14:06.123648 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:06 crc kubenswrapper[5107]: I0126 00:14:06.124274 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:06 crc kubenswrapper[5107]: I0126 00:14:06.124637 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:06 crc kubenswrapper[5107]: I0126 00:14:06.124846 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:06 crc kubenswrapper[5107]: I0126 00:14:06.125153 5107 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:06 crc kubenswrapper[5107]: I0126 00:14:06.873180 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:14:09 crc kubenswrapper[5107]: E0126 00:14:09.667564 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.203:6443: connect: connection refused" event=< Jan 26 00:14:09 crc kubenswrapper[5107]: &Event{ObjectMeta:{route-controller-manager-76959bf66b-7tfq6.188e1f946c63f4cc openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-76959bf66b-7tfq6,UID:42760d54-52a5-4718-966c-b35aae39b112,APIVersion:v1,ResourceVersion:39313,FieldPath:spec.containers{route-controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.63:8443/healthz": dial tcp 10.217.0.63:8443: connect: connection refused Jan 26 00:14:09 crc kubenswrapper[5107]: body: Jan 26 00:14:09 crc kubenswrapper[5107]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:13:45.93729454 +0000 UTC m=+270.854888926,LastTimestamp:2026-01-26 00:13:45.93729454 +0000 UTC m=+270.854888926,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:14:09 crc kubenswrapper[5107]: > Jan 26 00:14:10 crc kubenswrapper[5107]: I0126 00:14:10.923792 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-94c4c_7d907601-1852-43f9-8a70-ef4e71351e81/machine-config-daemon/0.log" Jan 26 00:14:10 crc kubenswrapper[5107]: I0126 00:14:10.924011 5107 generic.go:358] "Generic (PLEG): container finished" podID="7d907601-1852-43f9-8a70-ef4e71351e81" containerID="c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599" exitCode=-1 Jan 26 00:14:10 crc kubenswrapper[5107]: I0126 00:14:10.924065 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerDied","Data":"c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599"} Jan 26 00:14:11 crc kubenswrapper[5107]: E0126 00:14:11.601304 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="7s" Jan 26 00:14:12 crc kubenswrapper[5107]: I0126 00:14:12.368062 5107 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 00:14:12 crc kubenswrapper[5107]: I0126 00:14:12.369560 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 00:14:16 crc kubenswrapper[5107]: I0126 00:14:16.116763 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:16 crc kubenswrapper[5107]: I0126 00:14:16.117729 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:16 crc kubenswrapper[5107]: I0126 00:14:16.118327 5107 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection 
refused" Jan 26 00:14:16 crc kubenswrapper[5107]: I0126 00:14:16.119406 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:16 crc kubenswrapper[5107]: I0126 00:14:16.119620 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:16 crc kubenswrapper[5107]: I0126 00:14:16.120006 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:16 crc kubenswrapper[5107]: I0126 00:14:16.120753 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:16 crc kubenswrapper[5107]: I0126 00:14:16.121274 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:16 crc kubenswrapper[5107]: I0126 00:14:16.715545 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:14:16 crc kubenswrapper[5107]: I0126 00:14:16.733939 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:14:17 crc kubenswrapper[5107]: I0126 00:14:17.005215 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerStarted","Data":"e8533d9d343a82eee105ed10898832c472e05f5b38002db52b15945774cae6a3"} Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.014690 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.015145 5107 generic.go:358] "Generic (PLEG): container finished" podID="fc4541ce-7789-4670-bc75-5c2868e52ce0" containerID="d886c7e3f5792dfeb1a00971e1427c66f79512445c672d4c87f89153b14984a0" exitCode=1 Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.015257 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerDied","Data":"d886c7e3f5792dfeb1a00971e1427c66f79512445c672d4c87f89153b14984a0"} Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.016065 5107 scope.go:117] "RemoveContainer" containerID="d886c7e3f5792dfeb1a00971e1427c66f79512445c672d4c87f89153b14984a0" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.016530 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.017007 5107 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.017678 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.017695 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.017988 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.018200 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.018410 5107 status_manager.go:895] "Failed to get status for pod" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-dgvkt\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.018640 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.018933 5107 
status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.019032 5107 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="f9eaf74fb54a260a47e74a8722b69b176cbc34f08f17455931b82544d77acc59" exitCode=0 Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.019115 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"f9eaf74fb54a260a47e74a8722b69b176cbc34f08f17455931b82544d77acc59"} Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.019206 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.019349 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="504c44df-fe93-44f1-bab1-0ea8b1eb3980" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.019371 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="504c44df-fe93-44f1-bab1-0ea8b1eb3980" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.019566 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: E0126 00:14:18.019626 5107 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.019812 5107 status_manager.go:895] "Failed to get status for pod" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-dgvkt\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.020087 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.020544 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" 
pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.020749 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.021001 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.021358 5107 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.021764 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.022024 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.022324 5107 status_manager.go:895] "Failed to get status for pod" podUID="42760d54-52a5-4718-966c-b35aae39b112" pod="openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-76959bf66b-7tfq6\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.022557 5107 status_manager.go:895] "Failed to get status for pod" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-dgvkt\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.022784 5107 status_manager.go:895] "Failed to get status for pod" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-94c4c\": dial tcp 38.102.83.203:6443: connect: 
connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.023008 5107 status_manager.go:895] "Failed to get status for pod" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" pod="openshift-authentication/oauth-openshift-66458b6674-wsw2x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-wsw2x\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.023223 5107 status_manager.go:895] "Failed to get status for pod" podUID="4498876a-5953-499f-aa71-6899b8529dcf" pod="openshift-console/downloads-747b44746d-64rgr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-64rgr\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.023454 5107 status_manager.go:895] "Failed to get status for pod" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" pod="openshift-controller-manager/controller-manager-b5489864b-xgrwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-b5489864b-xgrwf\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.023690 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.023916 5107 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.024154 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: I0126 00:14:18.024385 5107 status_manager.go:895] "Failed to get status for pod" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.203:6443: connect: connection refused" Jan 26 00:14:18 crc kubenswrapper[5107]: E0126 00:14:18.603590 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.203:6443: connect: connection refused" interval="7s" Jan 26 00:14:19 crc kubenswrapper[5107]: I0126 00:14:19.031280 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:14:19 crc kubenswrapper[5107]: I0126 00:14:19.033379 5107 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"46876b53bd3f75d2ab6cf27aa82e76afa431edda95d1636e43e5626ec5607bde"} Jan 26 00:14:20 crc kubenswrapper[5107]: I0126 00:14:20.045324 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"8d30c3e99f5ec9c126957ef61f2500f087e5d720cec578b239398c7d9948033a"} Jan 26 00:14:21 crc kubenswrapper[5107]: I0126 00:14:21.056793 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"a9c41176c8e05f51517109e0141862bf171978b02beb864db90871ecd1c18835"} Jan 26 00:14:22 crc kubenswrapper[5107]: I0126 00:14:22.087083 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f9578879cf8c4a1b676ee95acdd22339eb36aa863880b1807b324b0904d409bc"} Jan 26 00:14:22 crc kubenswrapper[5107]: I0126 00:14:22.089839 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6c5389f8f2dd6661f185b7f18fe778c75dde75784944fba9db91e4c184a65b96"} Jan 26 00:14:22 crc kubenswrapper[5107]: I0126 00:14:22.365987 5107 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 00:14:22 crc kubenswrapper[5107]: I0126 00:14:22.366129 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 00:14:22 crc kubenswrapper[5107]: I0126 00:14:22.366206 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:14:22 crc kubenswrapper[5107]: I0126 00:14:22.367246 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"09d0327bae68208669f52d0b061428d9f9b6ef82e15d696ca77d71d24faf9e36"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 26 00:14:22 crc kubenswrapper[5107]: I0126 00:14:22.367367 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://09d0327bae68208669f52d0b061428d9f9b6ef82e15d696ca77d71d24faf9e36" gracePeriod=30 Jan 26 00:14:23 crc kubenswrapper[5107]: I0126 00:14:23.097664 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"e4eac776cdcbd89fefec63399b6ee1c5bde723a59c9b7aeb411335f863aef318"} Jan 26 00:14:23 crc kubenswrapper[5107]: I0126 00:14:23.099120 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="504c44df-fe93-44f1-bab1-0ea8b1eb3980" Jan 26 00:14:23 crc kubenswrapper[5107]: I0126 00:14:23.099273 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="504c44df-fe93-44f1-bab1-0ea8b1eb3980" Jan 26 00:14:23 crc kubenswrapper[5107]: I0126 00:14:23.099757 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:23 crc kubenswrapper[5107]: I0126 00:14:23.108870 5107 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:23 crc kubenswrapper[5107]: I0126 00:14:23.109220 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:24 crc kubenswrapper[5107]: I0126 00:14:24.106553 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="504c44df-fe93-44f1-bab1-0ea8b1eb3980" Jan 26 00:14:24 crc kubenswrapper[5107]: I0126 00:14:24.106597 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="504c44df-fe93-44f1-bab1-0ea8b1eb3980" Jan 26 00:14:25 crc kubenswrapper[5107]: I0126 00:14:25.852440 5107 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="e13e84f7-4c5c-4fe4-bc05-7e958b704873" Jan 26 00:14:50 crc kubenswrapper[5107]: I0126 00:14:50.099389 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 26 00:14:50 crc kubenswrapper[5107]: I0126 00:14:50.697642 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 26 00:14:50 crc kubenswrapper[5107]: I0126 00:14:50.808416 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 26 00:14:51 crc kubenswrapper[5107]: I0126 00:14:51.281255 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 26 00:14:51 crc kubenswrapper[5107]: I0126 00:14:51.314419 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 26 00:14:51 crc kubenswrapper[5107]: I0126 00:14:51.706627 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 26 00:14:51 crc kubenswrapper[5107]: I0126 00:14:51.950638 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 26 00:14:52 crc kubenswrapper[5107]: I0126 00:14:52.489708 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 26 
00:14:52 crc kubenswrapper[5107]: I0126 00:14:52.508396 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 26 00:14:52 crc kubenswrapper[5107]: I0126 00:14:52.583513 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 26 00:14:52 crc kubenswrapper[5107]: I0126 00:14:52.964972 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.116748 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.173634 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.253116 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.405619 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.407931 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.408020 5107 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="09d0327bae68208669f52d0b061428d9f9b6ef82e15d696ca77d71d24faf9e36" exitCode=137 Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.408126 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"09d0327bae68208669f52d0b061428d9f9b6ef82e15d696ca77d71d24faf9e36"} Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.408344 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ad9ce4a39534a456066c2220364cd0cb3acb919266b8e330b480d2c726781574"} Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.408377 5107 scope.go:117] "RemoveContainer" containerID="322b4a3e2a376c541682895450ed098e45acabe88d84fda4adbc15c56d32ab5b" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.446407 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.492991 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.725977 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.862743 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.881518 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 26 00:14:53 crc kubenswrapper[5107]: I0126 00:14:53.977721 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 26 00:14:54 crc kubenswrapper[5107]: I0126 00:14:54.039213 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 26 00:14:54 crc kubenswrapper[5107]: I0126 00:14:54.292134 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 26 00:14:54 crc kubenswrapper[5107]: I0126 00:14:54.419535 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:14:54 crc kubenswrapper[5107]: I0126 00:14:54.451596 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 26 00:14:54 crc kubenswrapper[5107]: I0126 00:14:54.494734 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 26 00:14:54 crc kubenswrapper[5107]: I0126 00:14:54.496399 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:14:54 crc kubenswrapper[5107]: I0126 00:14:54.843577 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 26 00:14:54 crc kubenswrapper[5107]: I0126 00:14:54.894523 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 26 00:14:55 crc kubenswrapper[5107]: I0126 00:14:55.725601 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 26 00:14:55 crc kubenswrapper[5107]: I0126 00:14:55.735876 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 26 00:14:55 crc kubenswrapper[5107]: I0126 00:14:55.962930 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.255356 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.257263 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.258536 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.366538 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.436161 5107 generic.go:358] "Generic (PLEG): container finished" podID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerID="fb056342c376f0f8e441027f13f024c742b5377e6f69864030fadb560fb90a89" exitCode=0 Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.436234 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" event={"ID":"d93df320-4284-49f0-b63d-ba8a86943f2e","Type":"ContainerDied","Data":"fb056342c376f0f8e441027f13f024c742b5377e6f69864030fadb560fb90a89"} Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.436869 5107 scope.go:117] "RemoveContainer" containerID="fb056342c376f0f8e441027f13f024c742b5377e6f69864030fadb560fb90a89" Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.674458 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.681950 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.779845 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.873037 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:14:56 crc kubenswrapper[5107]: I0126 00:14:56.960714 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.021647 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.175379 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.363693 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.421914 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.444275 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-59jn5_d93df320-4284-49f0-b63d-ba8a86943f2e/marketplace-operator/1.log" Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.445751 5107 generic.go:358] "Generic (PLEG): container finished" podID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerID="e38dfb80bc684842805a934db57ac36d88492a6032b81a4bf3c4665a02c5918a" exitCode=1 Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.445798 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" event={"ID":"d93df320-4284-49f0-b63d-ba8a86943f2e","Type":"ContainerDied","Data":"e38dfb80bc684842805a934db57ac36d88492a6032b81a4bf3c4665a02c5918a"} Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.445862 5107 
scope.go:117] "RemoveContainer" containerID="fb056342c376f0f8e441027f13f024c742b5377e6f69864030fadb560fb90a89" Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.446352 5107 scope.go:117] "RemoveContainer" containerID="e38dfb80bc684842805a934db57ac36d88492a6032b81a4bf3c4665a02c5918a" Jan 26 00:14:57 crc kubenswrapper[5107]: E0126 00:14:57.446611 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-59jn5_openshift-marketplace(d93df320-4284-49f0-b63d-ba8a86943f2e)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.574655 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.637306 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.869811 5107 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:14:57 crc kubenswrapper[5107]: I0126 00:14:57.886965 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 26 00:14:58 crc kubenswrapper[5107]: I0126 00:14:58.076350 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 26 00:14:58 crc kubenswrapper[5107]: I0126 00:14:58.466734 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-59jn5_d93df320-4284-49f0-b63d-ba8a86943f2e/marketplace-operator/1.log" Jan 26 00:14:58 crc kubenswrapper[5107]: I0126 00:14:58.685213 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 26 00:14:58 crc kubenswrapper[5107]: I0126 00:14:58.776513 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.105146 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.132819 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.176630 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.296562 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.322252 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.355688 5107 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.455032 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.456547 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.502537 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.591799 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.631370 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.771747 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 26 00:14:59 crc kubenswrapper[5107]: I0126 00:14:59.827832 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 26 00:15:00 crc kubenswrapper[5107]: I0126 00:15:00.036364 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 26 00:15:00 crc kubenswrapper[5107]: I0126 00:15:00.069342 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 26 00:15:00 crc kubenswrapper[5107]: I0126 00:15:00.351441 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 26 00:15:00 crc kubenswrapper[5107]: I0126 00:15:00.518670 5107 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:15:00 crc kubenswrapper[5107]: I0126 00:15:00.714112 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 26 00:15:00 crc kubenswrapper[5107]: I0126 00:15:00.813358 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:01 crc kubenswrapper[5107]: I0126 00:15:01.039851 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 26 00:15:01 crc kubenswrapper[5107]: I0126 00:15:01.122738 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 26 00:15:01 crc kubenswrapper[5107]: I0126 00:15:01.330355 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 26 00:15:01 crc kubenswrapper[5107]: I0126 00:15:01.578299 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 26 00:15:01 crc kubenswrapper[5107]: I0126 00:15:01.752127 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 26 00:15:01 crc kubenswrapper[5107]: I0126 00:15:01.752785 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 26 00:15:01 crc kubenswrapper[5107]: I0126 00:15:01.915221 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 26 00:15:01 crc kubenswrapper[5107]: I0126 00:15:01.978702 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 26 00:15:02 crc kubenswrapper[5107]: I0126 00:15:02.074661 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:15:02 crc kubenswrapper[5107]: I0126 00:15:02.075659 5107 scope.go:117] "RemoveContainer" containerID="e38dfb80bc684842805a934db57ac36d88492a6032b81a4bf3c4665a02c5918a" Jan 26 00:15:02 crc kubenswrapper[5107]: E0126 00:15:02.076143 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-59jn5_openshift-marketplace(d93df320-4284-49f0-b63d-ba8a86943f2e)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" Jan 26 00:15:02 crc kubenswrapper[5107]: I0126 00:15:02.366517 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:15:02 crc kubenswrapper[5107]: I0126 00:15:02.372918 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:15:02 crc kubenswrapper[5107]: I0126 00:15:02.486681 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 26 00:15:02 crc kubenswrapper[5107]: I0126 00:15:02.492335 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:15:02 crc kubenswrapper[5107]: I0126 00:15:02.793638 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:02 crc kubenswrapper[5107]: I0126 00:15:02.793727 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 26 00:15:03 crc kubenswrapper[5107]: I0126 00:15:03.142644 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 26 00:15:03 crc kubenswrapper[5107]: I0126 00:15:03.142734 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 26 00:15:03 crc kubenswrapper[5107]: I0126 00:15:03.334970 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 26 00:15:03 crc kubenswrapper[5107]: I0126 00:15:03.351020 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 26 00:15:03 crc kubenswrapper[5107]: I0126 00:15:03.429932 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:03 crc kubenswrapper[5107]: I0126 00:15:03.546332 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 26 00:15:03 crc kubenswrapper[5107]: I0126 00:15:03.692256 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 26 00:15:03 crc kubenswrapper[5107]: I0126 00:15:03.953738 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:15:04 crc kubenswrapper[5107]: I0126 00:15:04.046285 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 26 00:15:04 crc kubenswrapper[5107]: I0126 00:15:04.277311 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 26 00:15:04 crc kubenswrapper[5107]: I0126 00:15:04.383081 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 26 00:15:04 crc kubenswrapper[5107]: I0126 00:15:04.464235 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 26 00:15:04 crc kubenswrapper[5107]: I0126 00:15:04.466761 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 26 00:15:04 crc kubenswrapper[5107]: I0126 00:15:04.472474 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:15:04 crc kubenswrapper[5107]: I0126 00:15:04.473518 5107 scope.go:117] "RemoveContainer" containerID="e38dfb80bc684842805a934db57ac36d88492a6032b81a4bf3c4665a02c5918a" Jan 26 00:15:04 crc kubenswrapper[5107]: E0126 00:15:04.474080 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-59jn5_openshift-marketplace(d93df320-4284-49f0-b63d-ba8a86943f2e)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" Jan 26 00:15:04 crc kubenswrapper[5107]: I0126 00:15:04.480906 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:15:04 crc kubenswrapper[5107]: I0126 00:15:04.816220 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 26 00:15:04 crc kubenswrapper[5107]: I0126 00:15:04.833658 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 26 
00:15:05 crc kubenswrapper[5107]: I0126 00:15:05.090590 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 26 00:15:05 crc kubenswrapper[5107]: I0126 00:15:05.120361 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 26 00:15:05 crc kubenswrapper[5107]: I0126 00:15:05.314942 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:05 crc kubenswrapper[5107]: I0126 00:15:05.365185 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 26 00:15:05 crc kubenswrapper[5107]: I0126 00:15:05.635125 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 26 00:15:05 crc kubenswrapper[5107]: I0126 00:15:05.674679 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 26 00:15:05 crc kubenswrapper[5107]: I0126 00:15:05.862417 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 26 00:15:05 crc kubenswrapper[5107]: I0126 00:15:05.884745 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 26 00:15:05 crc kubenswrapper[5107]: I0126 00:15:05.962428 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 26 00:15:06 crc kubenswrapper[5107]: I0126 00:15:06.191719 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 26 00:15:06 crc kubenswrapper[5107]: I0126 00:15:06.202325 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 26 00:15:06 crc kubenswrapper[5107]: I0126 00:15:06.298573 5107 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 26 00:15:06 crc kubenswrapper[5107]: I0126 00:15:06.300483 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=81.300470564 podStartE2EDuration="1m21.300470564s" podCreationTimestamp="2026-01-26 00:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:14:25.848637987 +0000 UTC m=+310.766232343" watchObservedRunningTime="2026-01-26 00:15:06.300470564 +0000 UTC m=+351.218064910" Jan 26 00:15:06 crc kubenswrapper[5107]: I0126 00:15:06.304431 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b5489864b-xgrwf","openshift-authentication/oauth-openshift-66458b6674-wsw2x","openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-76959bf66b-7tfq6"] Jan 26 00:15:06 crc kubenswrapper[5107]: I0126 00:15:06.304493 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:15:06 crc 
kubenswrapper[5107]: I0126 00:15:06.309292 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:15:06 crc kubenswrapper[5107]: I0126 00:15:06.329249 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=43.329232445 podStartE2EDuration="43.329232445s" podCreationTimestamp="2026-01-26 00:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:06.327318998 +0000 UTC m=+351.244913354" watchObservedRunningTime="2026-01-26 00:15:06.329232445 +0000 UTC m=+351.246826791" Jan 26 00:15:06 crc kubenswrapper[5107]: I0126 00:15:06.403932 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 26 00:15:06 crc kubenswrapper[5107]: I0126 00:15:06.496766 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 26 00:15:06 crc kubenswrapper[5107]: I0126 00:15:06.515951 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:06 crc kubenswrapper[5107]: I0126 00:15:06.720228 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.098439 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.179174 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.532111 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.644966 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct"] Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645619 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" containerName="controller-manager" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645638 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" containerName="controller-manager" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645651 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" containerName="installer" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645659 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" containerName="installer" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645669 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" containerName="oauth-openshift" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645674 5107 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" containerName="oauth-openshift" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645686 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42760d54-52a5-4718-966c-b35aae39b112" containerName="route-controller-manager" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645691 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="42760d54-52a5-4718-966c-b35aae39b112" containerName="route-controller-manager" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645806 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="9ac12125-d091-4b8b-89ba-b5b821b7a825" containerName="installer" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645817 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="42760d54-52a5-4718-966c-b35aae39b112" containerName="route-controller-manager" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645825 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" containerName="oauth-openshift" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.645834 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" containerName="controller-manager" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.689073 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.688818 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp"] Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.695094 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.695338 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.695417 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.695529 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.695606 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.695814 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.695837 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.695923 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.696005 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.696155 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.696395 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.696553 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.696721 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-696f58747-rn9mv"] Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.696906 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.696963 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.701719 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.702154 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct"] Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.702180 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp"] Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.702192 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-696f58747-rn9mv"] Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.702270 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.702773 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.704740 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.704965 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705088 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc93e0e4-bd19-40d1-b43a-808b9f564704-serving-cert\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705144 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc93e0e4-bd19-40d1-b43a-808b9f564704-tmp\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705185 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-template-error\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705211 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-proxy-ca-bundles\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705248 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705271 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-audit-policies\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705291 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/84122824-565f-493b-bad7-2a4237bab8db-tmp\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705335 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705381 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705406 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9cba838-94d7-4cad-8a3c-6abeb8642586-audit-dir\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705426 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-session\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705461 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-client-ca\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705665 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbt9l\" (UniqueName: \"kubernetes.io/projected/84122824-565f-493b-bad7-2a4237bab8db-kube-api-access-pbt9l\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705681 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705697 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: 
\"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705717 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84122824-565f-493b-bad7-2a4237bab8db-serving-cert\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705755 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-config\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705780 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-config\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705804 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705834 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705863 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705899 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq88w\" (UniqueName: \"kubernetes.io/projected/c9cba838-94d7-4cad-8a3c-6abeb8642586-kube-api-access-kq88w\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705905 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 
00:15:07.705920 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-client-ca\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705953 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-624fx\" (UniqueName: \"kubernetes.io/projected/bc93e0e4-bd19-40d1-b43a-808b9f564704-kube-api-access-624fx\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705980 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.705999 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-template-login\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.706121 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.706239 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.706273 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.706676 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.707862 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.708050 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.708201 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.715309 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.718068 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.792847 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807423 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807516 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-template-login\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807582 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc93e0e4-bd19-40d1-b43a-808b9f564704-serving-cert\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807622 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc93e0e4-bd19-40d1-b43a-808b9f564704-tmp\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807686 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-template-error\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807758 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-proxy-ca-bundles\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807793 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807819 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-audit-policies\") 
pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807842 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/84122824-565f-493b-bad7-2a4237bab8db-tmp\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807903 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807940 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.807974 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9cba838-94d7-4cad-8a3c-6abeb8642586-audit-dir\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808008 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-session\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808041 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-client-ca\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808072 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pbt9l\" (UniqueName: \"kubernetes.io/projected/84122824-565f-493b-bad7-2a4237bab8db-kube-api-access-pbt9l\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808115 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: 
\"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808152 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84122824-565f-493b-bad7-2a4237bab8db-serving-cert\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808210 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-config\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808247 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-config\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808291 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808345 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808382 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808418 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kq88w\" (UniqueName: \"kubernetes.io/projected/c9cba838-94d7-4cad-8a3c-6abeb8642586-kube-api-access-kq88w\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808460 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-client-ca\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 
00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808511 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-624fx\" (UniqueName: \"kubernetes.io/projected/bc93e0e4-bd19-40d1-b43a-808b9f564704-kube-api-access-624fx\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.808978 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc93e0e4-bd19-40d1-b43a-808b9f564704-tmp\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.809025 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/84122824-565f-493b-bad7-2a4237bab8db-tmp\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.810987 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-audit-policies\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.811239 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-client-ca\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.811379 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9cba838-94d7-4cad-8a3c-6abeb8642586-audit-dir\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.811725 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-proxy-ca-bundles\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.812906 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.813705 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-config\") pod 
\"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.816518 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.816522 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.816701 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc93e0e4-bd19-40d1-b43a-808b9f564704-serving-cert\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.817589 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-config\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.817676 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.817872 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.818098 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-template-error\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.818991 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: 
\"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.819822 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-template-login\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.820671 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-client-ca\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.821645 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-session\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.823048 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.825544 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c9cba838-94d7-4cad-8a3c-6abeb8642586-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.826049 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84122824-565f-493b-bad7-2a4237bab8db-serving-cert\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.829413 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbt9l\" (UniqueName: \"kubernetes.io/projected/84122824-565f-493b-bad7-2a4237bab8db-kube-api-access-pbt9l\") pod \"route-controller-manager-7cc79cdc68-j2wjp\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.830482 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-624fx\" (UniqueName: \"kubernetes.io/projected/bc93e0e4-bd19-40d1-b43a-808b9f564704-kube-api-access-624fx\") pod \"controller-manager-696f58747-rn9mv\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " 
pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:07 crc kubenswrapper[5107]: I0126 00:15:07.835713 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq88w\" (UniqueName: \"kubernetes.io/projected/c9cba838-94d7-4cad-8a3c-6abeb8642586-kube-api-access-kq88w\") pod \"oauth-openshift-7fd89ccf9d-4fgct\" (UID: \"c9cba838-94d7-4cad-8a3c-6abeb8642586\") " pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.007002 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.007593 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.007899 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.021764 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.029497 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.037838 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.131043 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.131181 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42760d54-52a5-4718-966c-b35aae39b112" path="/var/lib/kubelet/pods/42760d54-52a5-4718-966c-b35aae39b112/volumes" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.131384 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.132298 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b083abe4-5d92-474f-bc10-63c8174bb862" path="/var/lib/kubelet/pods/b083abe4-5d92-474f-bc10-63c8174bb862/volumes" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.133059 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1eb51c7-ee2f-4230-929d-62d6608eca89" path="/var/lib/kubelet/pods/c1eb51c7-ee2f-4230-929d-62d6608eca89/volumes" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.169148 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.185465 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.234630 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 
00:15:08.316811 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp"] Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.338999 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-696f58747-rn9mv"] Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.344732 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct"] Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.420648 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.556984 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" event={"ID":"bc93e0e4-bd19-40d1-b43a-808b9f564704","Type":"ContainerStarted","Data":"b257910cb057973f0b4efee6c709d20ecb3dc091dbd2dfc930b12c497f20ed80"} Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.557103 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" event={"ID":"bc93e0e4-bd19-40d1-b43a-808b9f564704","Type":"ContainerStarted","Data":"16dab7b7ed239d4deb5f5ecd9915f29f38f181602edb068130b713cc755c78e7"} Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.557529 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.559567 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" event={"ID":"84122824-565f-493b-bad7-2a4237bab8db","Type":"ContainerStarted","Data":"59b2d0889178cba5b62bedbe98690a46a76582e93db52ef5c9f7be4cb5ae770d"} Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.559619 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" event={"ID":"84122824-565f-493b-bad7-2a4237bab8db","Type":"ContainerStarted","Data":"928fd58304aced75035418531005a7e014595861a218e9a71067b36f796d4f7f"} Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.559920 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.562108 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" event={"ID":"c9cba838-94d7-4cad-8a3c-6abeb8642586","Type":"ContainerStarted","Data":"188b0775b7e159ceb9404bd70717259bd0215c70efa74cadc64eb0a924eaa6a0"} Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.577277 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" podStartSLOduration=91.577256823 podStartE2EDuration="1m31.577256823s" podCreationTimestamp="2026-01-26 00:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:08.574086088 +0000 UTC m=+353.491680434" watchObservedRunningTime="2026-01-26 00:15:08.577256823 +0000 UTC m=+353.494851169" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.591090 5107 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" podStartSLOduration=91.591074996 podStartE2EDuration="1m31.591074996s" podCreationTimestamp="2026-01-26 00:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:08.589725856 +0000 UTC m=+353.507320202" watchObservedRunningTime="2026-01-26 00:15:08.591074996 +0000 UTC m=+353.508669343" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.747820 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.948238 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.977257 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 26 00:15:08 crc kubenswrapper[5107]: I0126 00:15:08.992686 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.043357 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.145604 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.191333 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.372518 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.504158 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.575871 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" event={"ID":"c9cba838-94d7-4cad-8a3c-6abeb8642586","Type":"ContainerStarted","Data":"7440ad33e6fe4508cfc6b9d62151a5cec03231a858e8dd0fe37c0b7f7af32acf"} Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.576631 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.577921 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.581532 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.588029 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 26 00:15:09 
crc kubenswrapper[5107]: I0126 00:15:09.618540 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7fd89ccf9d-4fgct" podStartSLOduration=107.618521218 podStartE2EDuration="1m47.618521218s" podCreationTimestamp="2026-01-26 00:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:09.597280452 +0000 UTC m=+354.514874808" watchObservedRunningTime="2026-01-26 00:15:09.618521218 +0000 UTC m=+354.536115564" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.714017 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.824629 5107 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:15:09 crc kubenswrapper[5107]: I0126 00:15:09.903846 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:10 crc kubenswrapper[5107]: I0126 00:15:10.025852 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:10 crc kubenswrapper[5107]: I0126 00:15:10.343478 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 26 00:15:10 crc kubenswrapper[5107]: I0126 00:15:10.485965 5107 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:15:10 crc kubenswrapper[5107]: I0126 00:15:10.519289 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 26 00:15:10 crc kubenswrapper[5107]: I0126 00:15:10.617360 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 26 00:15:10 crc kubenswrapper[5107]: I0126 00:15:10.696127 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 26 00:15:10 crc kubenswrapper[5107]: I0126 00:15:10.696158 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 26 00:15:10 crc kubenswrapper[5107]: I0126 00:15:10.818846 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 26 00:15:10 crc kubenswrapper[5107]: I0126 00:15:10.833371 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 26 00:15:10 crc kubenswrapper[5107]: I0126 00:15:10.891068 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 26 00:15:11 crc kubenswrapper[5107]: I0126 00:15:11.041008 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:11 crc kubenswrapper[5107]: I0126 00:15:11.228900 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:15:11 crc kubenswrapper[5107]: I0126 00:15:11.229732 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:15:11 crc kubenswrapper[5107]: I0126 00:15:11.232864 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 26 00:15:11 crc kubenswrapper[5107]: I0126 00:15:11.236698 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:15:11 crc kubenswrapper[5107]: I0126 00:15:11.255613 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:11 crc kubenswrapper[5107]: I0126 00:15:11.458480 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 26 00:15:11 crc kubenswrapper[5107]: I0126 00:15:11.462592 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 26 00:15:11 crc kubenswrapper[5107]: I0126 00:15:11.516380 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 26 00:15:11 crc kubenswrapper[5107]: I0126 00:15:11.604940 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:15:11 crc kubenswrapper[5107]: I0126 00:15:11.886540 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 26 00:15:12 crc kubenswrapper[5107]: I0126 00:15:12.055108 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 26 00:15:12 crc kubenswrapper[5107]: I0126 00:15:12.204161 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 26 00:15:12 crc kubenswrapper[5107]: I0126 00:15:12.240122 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:12 crc kubenswrapper[5107]: I0126 00:15:12.252190 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:12 crc kubenswrapper[5107]: I0126 00:15:12.586984 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:12 crc kubenswrapper[5107]: I0126 00:15:12.812840 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 26 00:15:12 crc kubenswrapper[5107]: I0126 00:15:12.919635 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:12 crc kubenswrapper[5107]: I0126 00:15:12.938806 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 26 00:15:13 crc 
kubenswrapper[5107]: I0126 00:15:13.154681 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 26 00:15:13 crc kubenswrapper[5107]: I0126 00:15:13.175078 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 26 00:15:13 crc kubenswrapper[5107]: I0126 00:15:13.269309 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:13 crc kubenswrapper[5107]: I0126 00:15:13.789146 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:13 crc kubenswrapper[5107]: I0126 00:15:13.806266 5107 ???:1] "http: TLS handshake error from 192.168.126.11:32842: no serving certificate available for the kubelet" Jan 26 00:15:13 crc kubenswrapper[5107]: I0126 00:15:13.828709 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 26 00:15:14 crc kubenswrapper[5107]: I0126 00:15:14.088935 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 26 00:15:14 crc kubenswrapper[5107]: I0126 00:15:14.336154 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 26 00:15:14 crc kubenswrapper[5107]: I0126 00:15:14.376654 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 26 00:15:14 crc kubenswrapper[5107]: I0126 00:15:14.384207 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 26 00:15:14 crc kubenswrapper[5107]: I0126 00:15:14.484531 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 26 00:15:14 crc kubenswrapper[5107]: I0126 00:15:14.786552 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 26 00:15:14 crc kubenswrapper[5107]: I0126 00:15:14.832957 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 26 00:15:14 crc kubenswrapper[5107]: I0126 00:15:14.865447 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:15:14 crc kubenswrapper[5107]: I0126 00:15:14.924942 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 26 00:15:15 crc kubenswrapper[5107]: I0126 00:15:15.004715 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 26 00:15:15 crc kubenswrapper[5107]: I0126 00:15:15.155504 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 26 00:15:15 crc kubenswrapper[5107]: I0126 00:15:15.286501 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" 
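The long run of reflector.go:430 "Caches populated" entries above marks each client-go reflector inside the kubelet finishing its initial List/Watch for a ConfigMap or Secret referenced by pods on the node. As a rough illustration of that same client-go pattern (a sketch only, not the kubelet's own code; the namespace choice and kubeconfig handling below are assumptions for the example), starting a shared informer and blocking until its cache is populated looks like this:

// Minimal sketch (not kubelet code): start a client-go informer for
// ConfigMaps and wait until its local cache is populated, which is the
// condition the "Caches populated" reflector log lines report.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption for the example: credentials come from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// One shared informer factory, scoped to a single namespace for brevity.
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 0, informers.WithNamespace("openshift-controller-manager"))
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	factory.Start(ctx.Done())

	// WaitForCacheSync returns once the reflector's initial List/Watch has
	// completed, i.e. the point at which the kubelet logs "Caches populated".
	if !cache.WaitForCacheSync(ctx.Done(), cmInformer.HasSynced) {
		panic("configmap cache never synced")
	}
	fmt.Println("configmap cache populated")
}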
Jan 26 00:15:15 crc kubenswrapper[5107]: I0126 00:15:15.694186 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 26 00:15:15 crc kubenswrapper[5107]: I0126 00:15:15.947673 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 26 00:15:16 crc kubenswrapper[5107]: I0126 00:15:16.199695 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 26 00:15:16 crc kubenswrapper[5107]: I0126 00:15:16.333611 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 26 00:15:16 crc kubenswrapper[5107]: I0126 00:15:16.599800 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 26 00:15:16 crc kubenswrapper[5107]: I0126 00:15:16.671233 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 26 00:15:16 crc kubenswrapper[5107]: I0126 00:15:16.731175 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 26 00:15:16 crc kubenswrapper[5107]: I0126 00:15:16.992935 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:17 crc kubenswrapper[5107]: I0126 00:15:17.615896 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 26 00:15:17 crc kubenswrapper[5107]: I0126 00:15:17.620325 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 26 00:15:17 crc kubenswrapper[5107]: I0126 00:15:17.791960 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 26 00:15:17 crc kubenswrapper[5107]: I0126 00:15:17.889025 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.112632 5107 scope.go:117] "RemoveContainer" containerID="e38dfb80bc684842805a934db57ac36d88492a6032b81a4bf3c4665a02c5918a" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.138654 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb"] Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.174482 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-696f58747-rn9mv"] Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.174562 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp"] Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.174726 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.175055 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" podUID="bc93e0e4-bd19-40d1-b43a-808b9f564704" containerName="controller-manager" containerID="cri-o://b257910cb057973f0b4efee6c709d20ecb3dc091dbd2dfc930b12c497f20ed80" gracePeriod=30 Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.175321 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" podUID="84122824-565f-493b-bad7-2a4237bab8db" containerName="route-controller-manager" containerID="cri-o://59b2d0889178cba5b62bedbe98690a46a76582e93db52ef5c9f7be4cb5ae770d" gracePeriod=30 Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.178263 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb"] Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.193853 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.194046 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.221918 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14d95cb2-2011-463d-9f85-133376945921-config-volume\") pod \"collect-profiles-29489775-4q4nb\" (UID: \"14d95cb2-2011-463d-9f85-133376945921\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.222429 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcmjn\" (UniqueName: \"kubernetes.io/projected/14d95cb2-2011-463d-9f85-133376945921-kube-api-access-xcmjn\") pod \"collect-profiles-29489775-4q4nb\" (UID: \"14d95cb2-2011-463d-9f85-133376945921\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.222470 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14d95cb2-2011-463d-9f85-133376945921-secret-volume\") pod \"collect-profiles-29489775-4q4nb\" (UID: \"14d95cb2-2011-463d-9f85-133376945921\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.276767 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.310056 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.324263 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14d95cb2-2011-463d-9f85-133376945921-secret-volume\") pod \"collect-profiles-29489775-4q4nb\" (UID: 
\"14d95cb2-2011-463d-9f85-133376945921\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.324353 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14d95cb2-2011-463d-9f85-133376945921-config-volume\") pod \"collect-profiles-29489775-4q4nb\" (UID: \"14d95cb2-2011-463d-9f85-133376945921\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.324414 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xcmjn\" (UniqueName: \"kubernetes.io/projected/14d95cb2-2011-463d-9f85-133376945921-kube-api-access-xcmjn\") pod \"collect-profiles-29489775-4q4nb\" (UID: \"14d95cb2-2011-463d-9f85-133376945921\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.328189 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14d95cb2-2011-463d-9f85-133376945921-config-volume\") pod \"collect-profiles-29489775-4q4nb\" (UID: \"14d95cb2-2011-463d-9f85-133376945921\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.337557 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14d95cb2-2011-463d-9f85-133376945921-secret-volume\") pod \"collect-profiles-29489775-4q4nb\" (UID: \"14d95cb2-2011-463d-9f85-133376945921\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.354688 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcmjn\" (UniqueName: \"kubernetes.io/projected/14d95cb2-2011-463d-9f85-133376945921-kube-api-access-xcmjn\") pod \"collect-profiles-29489775-4q4nb\" (UID: \"14d95cb2-2011-463d-9f85-133376945921\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.512050 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.562582 5107 patch_prober.go:28] interesting pod/route-controller-manager-7cc79cdc68-j2wjp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.562662 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" podUID="84122824-565f-493b-bad7-2a4237bab8db" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.640875 5107 generic.go:358] "Generic (PLEG): container finished" podID="bc93e0e4-bd19-40d1-b43a-808b9f564704" containerID="b257910cb057973f0b4efee6c709d20ecb3dc091dbd2dfc930b12c497f20ed80" exitCode=0 Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.640948 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" event={"ID":"bc93e0e4-bd19-40d1-b43a-808b9f564704","Type":"ContainerDied","Data":"b257910cb057973f0b4efee6c709d20ecb3dc091dbd2dfc930b12c497f20ed80"} Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.645828 5107 generic.go:358] "Generic (PLEG): container finished" podID="84122824-565f-493b-bad7-2a4237bab8db" containerID="59b2d0889178cba5b62bedbe98690a46a76582e93db52ef5c9f7be4cb5ae770d" exitCode=0 Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.645902 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" event={"ID":"84122824-565f-493b-bad7-2a4237bab8db","Type":"ContainerDied","Data":"59b2d0889178cba5b62bedbe98690a46a76582e93db52ef5c9f7be4cb5ae770d"} Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.647958 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-59jn5_d93df320-4284-49f0-b63d-ba8a86943f2e/marketplace-operator/1.log" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.648084 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" event={"ID":"d93df320-4284-49f0-b63d-ba8a86943f2e","Type":"ContainerStarted","Data":"ca9dec53aa7c93c365f90b547feb255a966e3d662679cdaef2eab8637f7f82e9"} Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.648492 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.650283 5107 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-59jn5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.651424 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.779870 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.907576 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.954900 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:18 crc kubenswrapper[5107]: I0126 00:15:18.991970 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.035095 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-config\") pod \"84122824-565f-493b-bad7-2a4237bab8db\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.035284 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/84122824-565f-493b-bad7-2a4237bab8db-tmp\") pod \"84122824-565f-493b-bad7-2a4237bab8db\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.035330 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbt9l\" (UniqueName: \"kubernetes.io/projected/84122824-565f-493b-bad7-2a4237bab8db-kube-api-access-pbt9l\") pod \"84122824-565f-493b-bad7-2a4237bab8db\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.035406 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84122824-565f-493b-bad7-2a4237bab8db-serving-cert\") pod \"84122824-565f-493b-bad7-2a4237bab8db\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.035466 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-client-ca\") pod \"84122824-565f-493b-bad7-2a4237bab8db\" (UID: \"84122824-565f-493b-bad7-2a4237bab8db\") " Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.036403 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84122824-565f-493b-bad7-2a4237bab8db-tmp" (OuterVolumeSpecName: "tmp") pod "84122824-565f-493b-bad7-2a4237bab8db" (UID: "84122824-565f-493b-bad7-2a4237bab8db"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.036962 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-config" (OuterVolumeSpecName: "config") pod "84122824-565f-493b-bad7-2a4237bab8db" (UID: "84122824-565f-493b-bad7-2a4237bab8db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.037664 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-client-ca" (OuterVolumeSpecName: "client-ca") pod "84122824-565f-493b-bad7-2a4237bab8db" (UID: "84122824-565f-493b-bad7-2a4237bab8db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.037871 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/84122824-565f-493b-bad7-2a4237bab8db-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.037911 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.037925 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84122824-565f-493b-bad7-2a4237bab8db-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.042025 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84122824-565f-493b-bad7-2a4237bab8db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84122824-565f-493b-bad7-2a4237bab8db" (UID: "84122824-565f-493b-bad7-2a4237bab8db"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.048167 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84122824-565f-493b-bad7-2a4237bab8db-kube-api-access-pbt9l" (OuterVolumeSpecName: "kube-api-access-pbt9l") pod "84122824-565f-493b-bad7-2a4237bab8db" (UID: "84122824-565f-493b-bad7-2a4237bab8db"). InnerVolumeSpecName "kube-api-access-pbt9l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.072578 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl"] Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.073404 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="84122824-565f-493b-bad7-2a4237bab8db" containerName="route-controller-manager" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.073430 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="84122824-565f-493b-bad7-2a4237bab8db" containerName="route-controller-manager" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.073551 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="84122824-565f-493b-bad7-2a4237bab8db" containerName="route-controller-manager" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.075162 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.122337 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl"] Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.122416 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb"] Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.122440 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.126491 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bc93e0e4-bd19-40d1-b43a-808b9f564704" containerName="controller-manager" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.126549 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc93e0e4-bd19-40d1-b43a-808b9f564704" containerName="controller-manager" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.126680 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="bc93e0e4-bd19-40d1-b43a-808b9f564704" containerName="controller-manager" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.130458 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb"] Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.130636 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.132426 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb"] Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.139941 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-client-ca\") pod \"bc93e0e4-bd19-40d1-b43a-808b9f564704\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.140079 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-proxy-ca-bundles\") pod \"bc93e0e4-bd19-40d1-b43a-808b9f564704\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.140286 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc93e0e4-bd19-40d1-b43a-808b9f564704-tmp\") pod \"bc93e0e4-bd19-40d1-b43a-808b9f564704\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.140374 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc93e0e4-bd19-40d1-b43a-808b9f564704-serving-cert\") pod \"bc93e0e4-bd19-40d1-b43a-808b9f564704\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.140544 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-config\") pod 
\"bc93e0e4-bd19-40d1-b43a-808b9f564704\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.140677 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-624fx\" (UniqueName: \"kubernetes.io/projected/bc93e0e4-bd19-40d1-b43a-808b9f564704-kube-api-access-624fx\") pod \"bc93e0e4-bd19-40d1-b43a-808b9f564704\" (UID: \"bc93e0e4-bd19-40d1-b43a-808b9f564704\") " Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.140966 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-proxy-ca-bundles\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141036 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-config\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141183 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aedd35a-04be-4f93-9d36-fff367952583-serving-cert\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141239 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9aedd35a-04be-4f93-9d36-fff367952583-tmp\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141299 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ee00ea-2175-4a9f-93da-56d5595a18e8-serving-cert\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141345 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lghqz\" (UniqueName: \"kubernetes.io/projected/9aedd35a-04be-4f93-9d36-fff367952583-kube-api-access-lghqz\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141379 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-config\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " 
pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141528 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-client-ca\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141573 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-client-ca\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141599 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h65v9\" (UniqueName: \"kubernetes.io/projected/63ee00ea-2175-4a9f-93da-56d5595a18e8-kube-api-access-h65v9\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141750 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63ee00ea-2175-4a9f-93da-56d5595a18e8-tmp\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141927 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pbt9l\" (UniqueName: \"kubernetes.io/projected/84122824-565f-493b-bad7-2a4237bab8db-kube-api-access-pbt9l\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.141950 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84122824-565f-493b-bad7-2a4237bab8db-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.142109 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bc93e0e4-bd19-40d1-b43a-808b9f564704" (UID: "bc93e0e4-bd19-40d1-b43a-808b9f564704"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.142630 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-client-ca" (OuterVolumeSpecName: "client-ca") pod "bc93e0e4-bd19-40d1-b43a-808b9f564704" (UID: "bc93e0e4-bd19-40d1-b43a-808b9f564704"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.143029 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc93e0e4-bd19-40d1-b43a-808b9f564704-tmp" (OuterVolumeSpecName: "tmp") pod "bc93e0e4-bd19-40d1-b43a-808b9f564704" (UID: "bc93e0e4-bd19-40d1-b43a-808b9f564704"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.145104 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-config" (OuterVolumeSpecName: "config") pod "bc93e0e4-bd19-40d1-b43a-808b9f564704" (UID: "bc93e0e4-bd19-40d1-b43a-808b9f564704"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.159224 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc93e0e4-bd19-40d1-b43a-808b9f564704-kube-api-access-624fx" (OuterVolumeSpecName: "kube-api-access-624fx") pod "bc93e0e4-bd19-40d1-b43a-808b9f564704" (UID: "bc93e0e4-bd19-40d1-b43a-808b9f564704"). InnerVolumeSpecName "kube-api-access-624fx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.162258 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc93e0e4-bd19-40d1-b43a-808b9f564704-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc93e0e4-bd19-40d1-b43a-808b9f564704" (UID: "bc93e0e4-bd19-40d1-b43a-808b9f564704"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424434 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aedd35a-04be-4f93-9d36-fff367952583-serving-cert\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424501 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9aedd35a-04be-4f93-9d36-fff367952583-tmp\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424526 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ee00ea-2175-4a9f-93da-56d5595a18e8-serving-cert\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424552 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lghqz\" (UniqueName: \"kubernetes.io/projected/9aedd35a-04be-4f93-9d36-fff367952583-kube-api-access-lghqz\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 
00:15:19.424579 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-config\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424620 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-client-ca\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424655 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-client-ca\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424685 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h65v9\" (UniqueName: \"kubernetes.io/projected/63ee00ea-2175-4a9f-93da-56d5595a18e8-kube-api-access-h65v9\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424744 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63ee00ea-2175-4a9f-93da-56d5595a18e8-tmp\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424796 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-proxy-ca-bundles\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424826 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-config\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424903 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc93e0e4-bd19-40d1-b43a-808b9f564704-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.424920 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc93e0e4-bd19-40d1-b43a-808b9f564704-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.425093 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.425105 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-624fx\" (UniqueName: \"kubernetes.io/projected/bc93e0e4-bd19-40d1-b43a-808b9f564704-kube-api-access-624fx\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.425115 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.425125 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc93e0e4-bd19-40d1-b43a-808b9f564704-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.442411 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-config\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.443300 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aedd35a-04be-4f93-9d36-fff367952583-serving-cert\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.444843 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9aedd35a-04be-4f93-9d36-fff367952583-tmp\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.445011 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-client-ca\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.445680 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-client-ca\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.448318 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63ee00ea-2175-4a9f-93da-56d5595a18e8-tmp\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.449121 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-config\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.449249 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-proxy-ca-bundles\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.454172 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ee00ea-2175-4a9f-93da-56d5595a18e8-serving-cert\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.467548 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lghqz\" (UniqueName: \"kubernetes.io/projected/9aedd35a-04be-4f93-9d36-fff367952583-kube-api-access-lghqz\") pod \"route-controller-manager-55c4558f4d-r9mnl\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.474685 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h65v9\" (UniqueName: \"kubernetes.io/projected/63ee00ea-2175-4a9f-93da-56d5595a18e8-kube-api-access-h65v9\") pod \"controller-manager-7fdd5659c7-5n8jb\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.501003 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.526877 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.560249 5107 patch_prober.go:28] interesting pod/controller-manager-696f58747-rn9mv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.560406 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" podUID="bc93e0e4-bd19-40d1-b43a-808b9f564704" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.669208 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" event={"ID":"14d95cb2-2011-463d-9f85-133376945921","Type":"ContainerStarted","Data":"f476ce2335ac662fc00353d601430b2cb2b6b0e7e79c9a34e592f153364b5a65"} Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.669261 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" event={"ID":"14d95cb2-2011-463d-9f85-133376945921","Type":"ContainerStarted","Data":"018df3627ed04e68f68f4a64a1bb59e348003aa985f0e4618b90431a8de1bf3a"} Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.673662 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.674697 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-696f58747-rn9mv" event={"ID":"bc93e0e4-bd19-40d1-b43a-808b9f564704","Type":"ContainerDied","Data":"16dab7b7ed239d4deb5f5ecd9915f29f38f181602edb068130b713cc755c78e7"} Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.674748 5107 scope.go:117] "RemoveContainer" containerID="b257910cb057973f0b4efee6c709d20ecb3dc091dbd2dfc930b12c497f20ed80" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.686322 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.686378 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp" event={"ID":"84122824-565f-493b-bad7-2a4237bab8db","Type":"ContainerDied","Data":"928fd58304aced75035418531005a7e014595861a218e9a71067b36f796d4f7f"} Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.708705 5107 scope.go:117] "RemoveContainer" containerID="59b2d0889178cba5b62bedbe98690a46a76582e93db52ef5c9f7be4cb5ae770d" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.774928 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.861425 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.868165 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" podStartSLOduration=1.868139403 podStartE2EDuration="1.868139403s" podCreationTimestamp="2026-01-26 00:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:19.789522859 +0000 UTC m=+364.707117205" watchObservedRunningTime="2026-01-26 00:15:19.868139403 +0000 UTC m=+364.785733749" Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.878613 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp"] Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.885348 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc79cdc68-j2wjp"] Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.894935 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-696f58747-rn9mv"] Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.899808 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-696f58747-rn9mv"] Jan 26 00:15:19 crc kubenswrapper[5107]: W0126 00:15:19.996315 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9aedd35a_04be_4f93_9d36_fff367952583.slice/crio-e89caf4407a85ea2f259d0e57e17063dd0fe1dcac5d2d0e10309dde724dab347 WatchSource:0}: Error finding container e89caf4407a85ea2f259d0e57e17063dd0fe1dcac5d2d0e10309dde724dab347: Status 404 returned error can't find the container with id e89caf4407a85ea2f259d0e57e17063dd0fe1dcac5d2d0e10309dde724dab347 Jan 26 00:15:19 crc kubenswrapper[5107]: I0126 00:15:19.996402 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl"] Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.031276 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb"] Jan 26 00:15:20 crc kubenswrapper[5107]: W0126 00:15:20.069626 5107 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ee00ea_2175_4a9f_93da_56d5595a18e8.slice/crio-db8eadccefc3ac0eaaabc67c68793bcc1eae111811a6a1b52c8698b70042e04b WatchSource:0}: Error finding container db8eadccefc3ac0eaaabc67c68793bcc1eae111811a6a1b52c8698b70042e04b: Status 404 returned error can't find the container with id db8eadccefc3ac0eaaabc67c68793bcc1eae111811a6a1b52c8698b70042e04b Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.123123 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84122824-565f-493b-bad7-2a4237bab8db" path="/var/lib/kubelet/pods/84122824-565f-493b-bad7-2a4237bab8db/volumes" Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.123800 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc93e0e4-bd19-40d1-b43a-808b9f564704" path="/var/lib/kubelet/pods/bc93e0e4-bd19-40d1-b43a-808b9f564704/volumes" Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.146123 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.242650 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.693984 5107 generic.go:358] "Generic (PLEG): container finished" podID="14d95cb2-2011-463d-9f85-133376945921" containerID="f476ce2335ac662fc00353d601430b2cb2b6b0e7e79c9a34e592f153364b5a65" exitCode=0 Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.694098 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" event={"ID":"14d95cb2-2011-463d-9f85-133376945921","Type":"ContainerDied","Data":"f476ce2335ac662fc00353d601430b2cb2b6b0e7e79c9a34e592f153364b5a65"} Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.696281 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.696913 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" event={"ID":"63ee00ea-2175-4a9f-93da-56d5595a18e8","Type":"ContainerStarted","Data":"68ff0fe33449e69127f550163cdbd740b1ad553c2b2004f8e9861c24ea706927"} Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.696950 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" event={"ID":"63ee00ea-2175-4a9f-93da-56d5595a18e8","Type":"ContainerStarted","Data":"db8eadccefc3ac0eaaabc67c68793bcc1eae111811a6a1b52c8698b70042e04b"} Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.697206 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.699626 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" event={"ID":"9aedd35a-04be-4f93-9d36-fff367952583","Type":"ContainerStarted","Data":"81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36"} Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.699674 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" 
event={"ID":"9aedd35a-04be-4f93-9d36-fff367952583","Type":"ContainerStarted","Data":"e89caf4407a85ea2f259d0e57e17063dd0fe1dcac5d2d0e10309dde724dab347"} Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.699787 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.706198 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.733979 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" podStartSLOduration=2.733956363 podStartE2EDuration="2.733956363s" podCreationTimestamp="2026-01-26 00:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:20.728342545 +0000 UTC m=+365.645936911" watchObservedRunningTime="2026-01-26 00:15:20.733956363 +0000 UTC m=+365.651550719" Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.752803 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" podStartSLOduration=2.752783637 podStartE2EDuration="2.752783637s" podCreationTimestamp="2026-01-26 00:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:20.74953874 +0000 UTC m=+365.667133086" watchObservedRunningTime="2026-01-26 00:15:20.752783637 +0000 UTC m=+365.670377983" Jan 26 00:15:20 crc kubenswrapper[5107]: I0126 00:15:20.945088 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:21 crc kubenswrapper[5107]: I0126 00:15:21.077179 5107 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:15:21 crc kubenswrapper[5107]: I0126 00:15:21.077520 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d" gracePeriod=5 Jan 26 00:15:21 crc kubenswrapper[5107]: I0126 00:15:21.462396 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:15:21 crc kubenswrapper[5107]: I0126 00:15:21.733019 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 26 00:15:21 crc kubenswrapper[5107]: I0126 00:15:21.942043 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.002476 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14d95cb2-2011-463d-9f85-133376945921-config-volume\") pod \"14d95cb2-2011-463d-9f85-133376945921\" (UID: \"14d95cb2-2011-463d-9f85-133376945921\") " Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.002585 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14d95cb2-2011-463d-9f85-133376945921-secret-volume\") pod \"14d95cb2-2011-463d-9f85-133376945921\" (UID: \"14d95cb2-2011-463d-9f85-133376945921\") " Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.002642 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcmjn\" (UniqueName: \"kubernetes.io/projected/14d95cb2-2011-463d-9f85-133376945921-kube-api-access-xcmjn\") pod \"14d95cb2-2011-463d-9f85-133376945921\" (UID: \"14d95cb2-2011-463d-9f85-133376945921\") " Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.003328 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14d95cb2-2011-463d-9f85-133376945921-config-volume" (OuterVolumeSpecName: "config-volume") pod "14d95cb2-2011-463d-9f85-133376945921" (UID: "14d95cb2-2011-463d-9f85-133376945921"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.008971 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d95cb2-2011-463d-9f85-133376945921-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "14d95cb2-2011-463d-9f85-133376945921" (UID: "14d95cb2-2011-463d-9f85-133376945921"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.011226 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14d95cb2-2011-463d-9f85-133376945921-kube-api-access-xcmjn" (OuterVolumeSpecName: "kube-api-access-xcmjn") pod "14d95cb2-2011-463d-9f85-133376945921" (UID: "14d95cb2-2011-463d-9f85-133376945921"). InnerVolumeSpecName "kube-api-access-xcmjn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.113333 5107 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14d95cb2-2011-463d-9f85-133376945921-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.113396 5107 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14d95cb2-2011-463d-9f85-133376945921-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.113410 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xcmjn\" (UniqueName: \"kubernetes.io/projected/14d95cb2-2011-463d-9f85-133376945921-kube-api-access-xcmjn\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.719131 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.719136 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-4q4nb" event={"ID":"14d95cb2-2011-463d-9f85-133376945921","Type":"ContainerDied","Data":"018df3627ed04e68f68f4a64a1bb59e348003aa985f0e4618b90431a8de1bf3a"} Jan 26 00:15:22 crc kubenswrapper[5107]: I0126 00:15:22.719208 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018df3627ed04e68f68f4a64a1bb59e348003aa985f0e4618b90431a8de1bf3a" Jan 26 00:15:23 crc kubenswrapper[5107]: I0126 00:15:23.798337 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 26 00:15:24 crc kubenswrapper[5107]: I0126 00:15:24.262479 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.657361 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.657770 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.754293 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.754351 5107 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d" exitCode=137 Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.754484 5107 scope.go:117] "RemoveContainer" containerID="853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.754559 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.782684 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.782753 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.782789 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.782863 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.782855 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.782969 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.783003 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.783014 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.783109 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.783303 5107 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.783320 5107 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.783331 5107 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.783341 5107 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.786795 5107 scope.go:117] "RemoveContainer" containerID="853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d" Jan 26 00:15:26 crc kubenswrapper[5107]: E0126 00:15:26.789283 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d\": container with ID starting with 853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d not found: ID does not exist" containerID="853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.789331 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d"} err="failed to get container status \"853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d\": rpc error: code = NotFound desc = could not find container \"853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d\": container with ID starting with 853034021ff13ae75ff8cbeeab672cc8d2153238b807c7ba2f05288f5ff7798d not found: ID does not exist" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.796234 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:15:26 crc kubenswrapper[5107]: I0126 00:15:26.885095 5107 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:28 crc kubenswrapper[5107]: I0126 00:15:28.121089 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 26 00:15:28 crc kubenswrapper[5107]: I0126 00:15:28.121322 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 26 00:15:28 crc kubenswrapper[5107]: I0126 00:15:28.132733 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:15:28 crc kubenswrapper[5107]: I0126 00:15:28.132826 5107 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bfafedad-c24f-4a13-89ab-78dc05d559da" Jan 26 00:15:28 crc kubenswrapper[5107]: I0126 00:15:28.136527 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:15:28 crc kubenswrapper[5107]: I0126 00:15:28.136556 5107 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bfafedad-c24f-4a13-89ab-78dc05d559da" Jan 26 00:15:28 crc kubenswrapper[5107]: I0126 00:15:28.295614 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 26 00:15:29 crc kubenswrapper[5107]: I0126 00:15:29.635323 5107 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 26 00:15:34 crc kubenswrapper[5107]: I0126 00:15:34.882042 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 26 00:15:36 crc kubenswrapper[5107]: I0126 00:15:36.093769 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.118070 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb"] Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.119439 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" podUID="63ee00ea-2175-4a9f-93da-56d5595a18e8" containerName="controller-manager" containerID="cri-o://68ff0fe33449e69127f550163cdbd740b1ad553c2b2004f8e9861c24ea706927" gracePeriod=30 Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.147510 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl"] Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.147945 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" podUID="9aedd35a-04be-4f93-9d36-fff367952583" 
containerName="route-controller-manager" containerID="cri-o://81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36" gracePeriod=30 Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.661087 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.711434 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc"] Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.712296 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14d95cb2-2011-463d-9f85-133376945921" containerName="collect-profiles" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.712321 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d95cb2-2011-463d-9f85-133376945921" containerName="collect-profiles" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.712343 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.712352 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.712377 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9aedd35a-04be-4f93-9d36-fff367952583" containerName="route-controller-manager" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.712385 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aedd35a-04be-4f93-9d36-fff367952583" containerName="route-controller-manager" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.712494 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="9aedd35a-04be-4f93-9d36-fff367952583" containerName="route-controller-manager" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.712510 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="14d95cb2-2011-463d-9f85-133376945921" containerName="collect-profiles" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.712523 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.720611 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc"] Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.720842 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.757319 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-client-ca\") pod \"9aedd35a-04be-4f93-9d36-fff367952583\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.757473 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lghqz\" (UniqueName: \"kubernetes.io/projected/9aedd35a-04be-4f93-9d36-fff367952583-kube-api-access-lghqz\") pod \"9aedd35a-04be-4f93-9d36-fff367952583\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.757513 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-config\") pod \"9aedd35a-04be-4f93-9d36-fff367952583\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.757549 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aedd35a-04be-4f93-9d36-fff367952583-serving-cert\") pod \"9aedd35a-04be-4f93-9d36-fff367952583\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.757569 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9aedd35a-04be-4f93-9d36-fff367952583-tmp\") pod \"9aedd35a-04be-4f93-9d36-fff367952583\" (UID: \"9aedd35a-04be-4f93-9d36-fff367952583\") " Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.758921 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-config" (OuterVolumeSpecName: "config") pod "9aedd35a-04be-4f93-9d36-fff367952583" (UID: "9aedd35a-04be-4f93-9d36-fff367952583"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.758969 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aedd35a-04be-4f93-9d36-fff367952583-tmp" (OuterVolumeSpecName: "tmp") pod "9aedd35a-04be-4f93-9d36-fff367952583" (UID: "9aedd35a-04be-4f93-9d36-fff367952583"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.759343 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-client-ca" (OuterVolumeSpecName: "client-ca") pod "9aedd35a-04be-4f93-9d36-fff367952583" (UID: "9aedd35a-04be-4f93-9d36-fff367952583"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.765637 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aedd35a-04be-4f93-9d36-fff367952583-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9aedd35a-04be-4f93-9d36-fff367952583" (UID: "9aedd35a-04be-4f93-9d36-fff367952583"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.765883 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aedd35a-04be-4f93-9d36-fff367952583-kube-api-access-lghqz" (OuterVolumeSpecName: "kube-api-access-lghqz") pod "9aedd35a-04be-4f93-9d36-fff367952583" (UID: "9aedd35a-04be-4f93-9d36-fff367952583"). InnerVolumeSpecName "kube-api-access-lghqz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.836056 5107 generic.go:358] "Generic (PLEG): container finished" podID="9aedd35a-04be-4f93-9d36-fff367952583" containerID="81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36" exitCode=0 Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.836318 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" event={"ID":"9aedd35a-04be-4f93-9d36-fff367952583","Type":"ContainerDied","Data":"81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36"} Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.836495 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" event={"ID":"9aedd35a-04be-4f93-9d36-fff367952583","Type":"ContainerDied","Data":"e89caf4407a85ea2f259d0e57e17063dd0fe1dcac5d2d0e10309dde724dab347"} Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.836528 5107 scope.go:117] "RemoveContainer" containerID="81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.836659 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.839494 5107 generic.go:358] "Generic (PLEG): container finished" podID="63ee00ea-2175-4a9f-93da-56d5595a18e8" containerID="68ff0fe33449e69127f550163cdbd740b1ad553c2b2004f8e9861c24ea706927" exitCode=0 Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.839591 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" event={"ID":"63ee00ea-2175-4a9f-93da-56d5595a18e8","Type":"ContainerDied","Data":"68ff0fe33449e69127f550163cdbd740b1ad553c2b2004f8e9861c24ea706927"} Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.858629 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d406f2a3-dca4-40e4-9fd3-250122f716ec-serving-cert\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.858686 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d406f2a3-dca4-40e4-9fd3-250122f716ec-tmp\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.858712 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-config\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.858961 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-client-ca\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.859337 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxknj\" (UniqueName: \"kubernetes.io/projected/d406f2a3-dca4-40e4-9fd3-250122f716ec-kube-api-access-bxknj\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.859708 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lghqz\" (UniqueName: \"kubernetes.io/projected/9aedd35a-04be-4f93-9d36-fff367952583-kube-api-access-lghqz\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.859763 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.859780 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aedd35a-04be-4f93-9d36-fff367952583-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.859820 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9aedd35a-04be-4f93-9d36-fff367952583-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.859834 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aedd35a-04be-4f93-9d36-fff367952583-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.862293 5107 scope.go:117] "RemoveContainer" containerID="81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36" Jan 26 00:15:37 crc kubenswrapper[5107]: E0126 00:15:37.863117 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36\": container with ID starting with 81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36 not found: ID does not exist" containerID="81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.863177 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36"} err="failed to get container status \"81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36\": rpc error: code = NotFound desc = could not find container 
\"81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36\": container with ID starting with 81a539334cb30e44d1b82a90d79c4a97560b285cba33ad1ca35129131a11ed36 not found: ID does not exist" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.886394 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl"] Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.897137 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4558f4d-r9mnl"] Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.961865 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bxknj\" (UniqueName: \"kubernetes.io/projected/d406f2a3-dca4-40e4-9fd3-250122f716ec-kube-api-access-bxknj\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.962012 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d406f2a3-dca4-40e4-9fd3-250122f716ec-serving-cert\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.962078 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d406f2a3-dca4-40e4-9fd3-250122f716ec-tmp\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.962111 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-config\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.962168 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-client-ca\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.962709 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d406f2a3-dca4-40e4-9fd3-250122f716ec-tmp\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.963488 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-client-ca\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " 
pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.963535 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-config\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.967377 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d406f2a3-dca4-40e4-9fd3-250122f716ec-serving-cert\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:37 crc kubenswrapper[5107]: I0126 00:15:37.982759 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxknj\" (UniqueName: \"kubernetes.io/projected/d406f2a3-dca4-40e4-9fd3-250122f716ec-kube-api-access-bxknj\") pod \"route-controller-manager-859dbb6555-hlmpc\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.017170 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.036682 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.049730 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7686d7c68d-v7sxh"] Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.050573 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="63ee00ea-2175-4a9f-93da-56d5595a18e8" containerName="controller-manager" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.050599 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ee00ea-2175-4a9f-93da-56d5595a18e8" containerName="controller-manager" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.050721 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="63ee00ea-2175-4a9f-93da-56d5595a18e8" containerName="controller-manager" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.063641 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63ee00ea-2175-4a9f-93da-56d5595a18e8-tmp\") pod \"63ee00ea-2175-4a9f-93da-56d5595a18e8\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.063710 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ee00ea-2175-4a9f-93da-56d5595a18e8-serving-cert\") pod \"63ee00ea-2175-4a9f-93da-56d5595a18e8\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.063756 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-client-ca\") pod 
\"63ee00ea-2175-4a9f-93da-56d5595a18e8\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.063831 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h65v9\" (UniqueName: \"kubernetes.io/projected/63ee00ea-2175-4a9f-93da-56d5595a18e8-kube-api-access-h65v9\") pod \"63ee00ea-2175-4a9f-93da-56d5595a18e8\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.063930 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-config\") pod \"63ee00ea-2175-4a9f-93da-56d5595a18e8\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.063958 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-proxy-ca-bundles\") pod \"63ee00ea-2175-4a9f-93da-56d5595a18e8\" (UID: \"63ee00ea-2175-4a9f-93da-56d5595a18e8\") " Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.064592 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ee00ea-2175-4a9f-93da-56d5595a18e8-tmp" (OuterVolumeSpecName: "tmp") pod "63ee00ea-2175-4a9f-93da-56d5595a18e8" (UID: "63ee00ea-2175-4a9f-93da-56d5595a18e8"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.064985 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "63ee00ea-2175-4a9f-93da-56d5595a18e8" (UID: "63ee00ea-2175-4a9f-93da-56d5595a18e8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.065118 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "63ee00ea-2175-4a9f-93da-56d5595a18e8" (UID: "63ee00ea-2175-4a9f-93da-56d5595a18e8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.065206 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-config" (OuterVolumeSpecName: "config") pod "63ee00ea-2175-4a9f-93da-56d5595a18e8" (UID: "63ee00ea-2175-4a9f-93da-56d5595a18e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.069894 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ee00ea-2175-4a9f-93da-56d5595a18e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "63ee00ea-2175-4a9f-93da-56d5595a18e8" (UID: "63ee00ea-2175-4a9f-93da-56d5595a18e8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.069996 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ee00ea-2175-4a9f-93da-56d5595a18e8-kube-api-access-h65v9" (OuterVolumeSpecName: "kube-api-access-h65v9") pod "63ee00ea-2175-4a9f-93da-56d5595a18e8" (UID: "63ee00ea-2175-4a9f-93da-56d5595a18e8"). InnerVolumeSpecName "kube-api-access-h65v9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.166284 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.166329 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.166365 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63ee00ea-2175-4a9f-93da-56d5595a18e8-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.166376 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ee00ea-2175-4a9f-93da-56d5595a18e8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.166389 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ee00ea-2175-4a9f-93da-56d5595a18e8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.166400 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h65v9\" (UniqueName: \"kubernetes.io/projected/63ee00ea-2175-4a9f-93da-56d5595a18e8-kube-api-access-h65v9\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.190981 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7686d7c68d-v7sxh"] Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.191202 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.199363 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aedd35a-04be-4f93-9d36-fff367952583" path="/var/lib/kubelet/pods/9aedd35a-04be-4f93-9d36-fff367952583/volumes" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.267307 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-config\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.267355 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9xvq\" (UniqueName: \"kubernetes.io/projected/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-kube-api-access-t9xvq\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.267456 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-serving-cert\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.267520 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-client-ca\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.267582 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-proxy-ca-bundles\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.267642 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-tmp\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.369861 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-tmp\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.369980 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-config\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.370009 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t9xvq\" (UniqueName: \"kubernetes.io/projected/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-kube-api-access-t9xvq\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.370149 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-serving-cert\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.370198 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-client-ca\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.370257 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-proxy-ca-bundles\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.370684 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-tmp\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.371461 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-proxy-ca-bundles\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.371481 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-client-ca\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.372104 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-config\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 
crc kubenswrapper[5107]: I0126 00:15:38.381851 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-serving-cert\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.388953 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9xvq\" (UniqueName: \"kubernetes.io/projected/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-kube-api-access-t9xvq\") pod \"controller-manager-7686d7c68d-v7sxh\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.507016 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.532706 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc"] Jan 26 00:15:38 crc kubenswrapper[5107]: W0126 00:15:38.536835 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd406f2a3_dca4_40e4_9fd3_250122f716ec.slice/crio-bea3908aafa50ef030362e88491a274de47d42e10b4d072d8c79c9093bf0848e WatchSource:0}: Error finding container bea3908aafa50ef030362e88491a274de47d42e10b4d072d8c79c9093bf0848e: Status 404 returned error can't find the container with id bea3908aafa50ef030362e88491a274de47d42e10b4d072d8c79c9093bf0848e Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.742858 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7686d7c68d-v7sxh"] Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.847502 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" event={"ID":"d406f2a3-dca4-40e4-9fd3-250122f716ec","Type":"ContainerStarted","Data":"9b79ea487c6d4a421bf02e9c98b0617180e52147416f20a3840b1ce45be8af96"} Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.847577 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" event={"ID":"d406f2a3-dca4-40e4-9fd3-250122f716ec","Type":"ContainerStarted","Data":"bea3908aafa50ef030362e88491a274de47d42e10b4d072d8c79c9093bf0848e"} Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.848243 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.848706 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" event={"ID":"a9ca2d40-8f6c-4f31-935e-e7f0ede594db","Type":"ContainerStarted","Data":"e59d0100a35742012d32fd71e578213636f1acb5778afb4a031a83436e88a500"} Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.850219 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" 
event={"ID":"63ee00ea-2175-4a9f-93da-56d5595a18e8","Type":"ContainerDied","Data":"db8eadccefc3ac0eaaabc67c68793bcc1eae111811a6a1b52c8698b70042e04b"} Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.850274 5107 scope.go:117] "RemoveContainer" containerID="68ff0fe33449e69127f550163cdbd740b1ad553c2b2004f8e9861c24ea706927" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.850365 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.875857 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" podStartSLOduration=1.875818695 podStartE2EDuration="1.875818695s" podCreationTimestamp="2026-01-26 00:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:38.870805045 +0000 UTC m=+383.788399391" watchObservedRunningTime="2026-01-26 00:15:38.875818695 +0000 UTC m=+383.793413041" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.888718 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.895258 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb"] Jan 26 00:15:38 crc kubenswrapper[5107]: I0126 00:15:38.901718 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7fdd5659c7-5n8jb"] Jan 26 00:15:39 crc kubenswrapper[5107]: I0126 00:15:39.053416 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:39 crc kubenswrapper[5107]: I0126 00:15:39.861860 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" event={"ID":"a9ca2d40-8f6c-4f31-935e-e7f0ede594db","Type":"ContainerStarted","Data":"f603219781ba44abe892c03632070ad053bc61de2f9bb24a8528512feeb80e08"} Jan 26 00:15:39 crc kubenswrapper[5107]: I0126 00:15:39.862195 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:39 crc kubenswrapper[5107]: I0126 00:15:39.868661 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:39 crc kubenswrapper[5107]: I0126 00:15:39.879168 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" podStartSLOduration=2.8791505539999998 podStartE2EDuration="2.879150554s" podCreationTimestamp="2026-01-26 00:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:39.879083042 +0000 UTC m=+384.796677398" watchObservedRunningTime="2026-01-26 00:15:39.879150554 +0000 UTC m=+384.796744900" Jan 26 00:15:40 crc kubenswrapper[5107]: I0126 00:15:40.000805 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 26 00:15:40 crc kubenswrapper[5107]: I0126 
00:15:40.078417 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 26 00:15:40 crc kubenswrapper[5107]: I0126 00:15:40.120050 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ee00ea-2175-4a9f-93da-56d5595a18e8" path="/var/lib/kubelet/pods/63ee00ea-2175-4a9f-93da-56d5595a18e8/volumes" Jan 26 00:15:40 crc kubenswrapper[5107]: I0126 00:15:40.523419 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 26 00:15:40 crc kubenswrapper[5107]: I0126 00:15:40.834773 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 26 00:15:43 crc kubenswrapper[5107]: I0126 00:15:43.307287 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 26 00:15:45 crc kubenswrapper[5107]: I0126 00:15:45.006521 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:46 crc kubenswrapper[5107]: I0126 00:15:46.649428 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 26 00:15:47 crc kubenswrapper[5107]: I0126 00:15:47.872988 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 26 00:15:49 crc kubenswrapper[5107]: I0126 00:15:49.343136 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:49 crc kubenswrapper[5107]: I0126 00:15:49.601274 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 26 00:15:51 crc kubenswrapper[5107]: I0126 00:15:51.319679 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 26 00:15:51 crc kubenswrapper[5107]: I0126 00:15:51.713835 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:15:52 crc kubenswrapper[5107]: I0126 00:15:52.530077 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:52 crc kubenswrapper[5107]: I0126 00:15:52.871534 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 26 00:15:54 crc kubenswrapper[5107]: I0126 00:15:54.913038 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 26 00:15:55 crc kubenswrapper[5107]: I0126 00:15:55.163376 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 26 00:15:56 crc kubenswrapper[5107]: I0126 00:15:56.849734 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 26 00:15:57 crc 
kubenswrapper[5107]: I0126 00:15:57.484698 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7686d7c68d-v7sxh"] Jan 26 00:15:57 crc kubenswrapper[5107]: I0126 00:15:57.485006 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" podUID="a9ca2d40-8f6c-4f31-935e-e7f0ede594db" containerName="controller-manager" containerID="cri-o://f603219781ba44abe892c03632070ad053bc61de2f9bb24a8528512feeb80e08" gracePeriod=30 Jan 26 00:15:57 crc kubenswrapper[5107]: I0126 00:15:57.543013 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc"] Jan 26 00:15:57 crc kubenswrapper[5107]: I0126 00:15:57.543603 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" podUID="d406f2a3-dca4-40e4-9fd3-250122f716ec" containerName="route-controller-manager" containerID="cri-o://9b79ea487c6d4a421bf02e9c98b0617180e52147416f20a3840b1ce45be8af96" gracePeriod=30 Jan 26 00:15:57 crc kubenswrapper[5107]: E0126 00:15:57.547761 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9ca2d40_8f6c_4f31_935e_e7f0ede594db.slice/crio-f603219781ba44abe892c03632070ad053bc61de2f9bb24a8528512feeb80e08.scope\": RecentStats: unable to find data in memory cache]" Jan 26 00:15:57 crc kubenswrapper[5107]: I0126 00:15:57.974946 5107 generic.go:358] "Generic (PLEG): container finished" podID="d406f2a3-dca4-40e4-9fd3-250122f716ec" containerID="9b79ea487c6d4a421bf02e9c98b0617180e52147416f20a3840b1ce45be8af96" exitCode=0 Jan 26 00:15:57 crc kubenswrapper[5107]: I0126 00:15:57.975178 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" event={"ID":"d406f2a3-dca4-40e4-9fd3-250122f716ec","Type":"ContainerDied","Data":"9b79ea487c6d4a421bf02e9c98b0617180e52147416f20a3840b1ce45be8af96"} Jan 26 00:15:57 crc kubenswrapper[5107]: I0126 00:15:57.979138 5107 generic.go:358] "Generic (PLEG): container finished" podID="a9ca2d40-8f6c-4f31-935e-e7f0ede594db" containerID="f603219781ba44abe892c03632070ad053bc61de2f9bb24a8528512feeb80e08" exitCode=0 Jan 26 00:15:57 crc kubenswrapper[5107]: I0126 00:15:57.979348 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" event={"ID":"a9ca2d40-8f6c-4f31-935e-e7f0ede594db","Type":"ContainerDied","Data":"f603219781ba44abe892c03632070ad053bc61de2f9bb24a8528512feeb80e08"} Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.116759 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.149281 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr"] Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.150219 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d406f2a3-dca4-40e4-9fd3-250122f716ec" containerName="route-controller-manager" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.150245 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d406f2a3-dca4-40e4-9fd3-250122f716ec" containerName="route-controller-manager" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.150422 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d406f2a3-dca4-40e4-9fd3-250122f716ec" containerName="route-controller-manager" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.159906 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.163012 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr"] Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.199560 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-client-ca\") pod \"d406f2a3-dca4-40e4-9fd3-250122f716ec\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.199625 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxknj\" (UniqueName: \"kubernetes.io/projected/d406f2a3-dca4-40e4-9fd3-250122f716ec-kube-api-access-bxknj\") pod \"d406f2a3-dca4-40e4-9fd3-250122f716ec\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.199679 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-config\") pod \"d406f2a3-dca4-40e4-9fd3-250122f716ec\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.199723 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d406f2a3-dca4-40e4-9fd3-250122f716ec-serving-cert\") pod \"d406f2a3-dca4-40e4-9fd3-250122f716ec\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.199832 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d406f2a3-dca4-40e4-9fd3-250122f716ec-tmp\") pod \"d406f2a3-dca4-40e4-9fd3-250122f716ec\" (UID: \"d406f2a3-dca4-40e4-9fd3-250122f716ec\") " Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.200267 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d406f2a3-dca4-40e4-9fd3-250122f716ec-tmp" (OuterVolumeSpecName: "tmp") pod "d406f2a3-dca4-40e4-9fd3-250122f716ec" (UID: "d406f2a3-dca4-40e4-9fd3-250122f716ec"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.200605 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "d406f2a3-dca4-40e4-9fd3-250122f716ec" (UID: "d406f2a3-dca4-40e4-9fd3-250122f716ec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.200619 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-config" (OuterVolumeSpecName: "config") pod "d406f2a3-dca4-40e4-9fd3-250122f716ec" (UID: "d406f2a3-dca4-40e4-9fd3-250122f716ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.209986 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d406f2a3-dca4-40e4-9fd3-250122f716ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d406f2a3-dca4-40e4-9fd3-250122f716ec" (UID: "d406f2a3-dca4-40e4-9fd3-250122f716ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.213153 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d406f2a3-dca4-40e4-9fd3-250122f716ec-kube-api-access-bxknj" (OuterVolumeSpecName: "kube-api-access-bxknj") pod "d406f2a3-dca4-40e4-9fd3-250122f716ec" (UID: "d406f2a3-dca4-40e4-9fd3-250122f716ec"). InnerVolumeSpecName "kube-api-access-bxknj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.286906 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.301798 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95586ab2-0116-4c10-9917-152e032fa736-config\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.301855 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4td88\" (UniqueName: \"kubernetes.io/projected/95586ab2-0116-4c10-9917-152e032fa736-kube-api-access-4td88\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.301983 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95586ab2-0116-4c10-9917-152e032fa736-serving-cert\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.302160 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95586ab2-0116-4c10-9917-152e032fa736-client-ca\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.302340 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95586ab2-0116-4c10-9917-152e032fa736-tmp\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.302401 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d406f2a3-dca4-40e4-9fd3-250122f716ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.302416 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d406f2a3-dca4-40e4-9fd3-250122f716ec-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.302428 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.302437 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bxknj\" (UniqueName: \"kubernetes.io/projected/d406f2a3-dca4-40e4-9fd3-250122f716ec-kube-api-access-bxknj\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.302459 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d406f2a3-dca4-40e4-9fd3-250122f716ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.316498 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t"] Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.317904 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9ca2d40-8f6c-4f31-935e-e7f0ede594db" containerName="controller-manager" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.317932 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ca2d40-8f6c-4f31-935e-e7f0ede594db" containerName="controller-manager" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.318134 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="a9ca2d40-8f6c-4f31-935e-e7f0ede594db" containerName="controller-manager" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.325943 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.381946 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t"] Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.403681 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-client-ca\") pod \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.403753 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-proxy-ca-bundles\") pod \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.403832 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-tmp\") pod \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.403873 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-serving-cert\") pod \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.404553 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-config\") pod \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.404552 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-client-ca" (OuterVolumeSpecName: "client-ca") pod "a9ca2d40-8f6c-4f31-935e-e7f0ede594db" (UID: "a9ca2d40-8f6c-4f31-935e-e7f0ede594db"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.404662 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a9ca2d40-8f6c-4f31-935e-e7f0ede594db" (UID: "a9ca2d40-8f6c-4f31-935e-e7f0ede594db"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.404701 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9xvq\" (UniqueName: \"kubernetes.io/projected/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-kube-api-access-t9xvq\") pod \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\" (UID: \"a9ca2d40-8f6c-4f31-935e-e7f0ede594db\") " Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.405098 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-config" (OuterVolumeSpecName: "config") pod "a9ca2d40-8f6c-4f31-935e-e7f0ede594db" (UID: "a9ca2d40-8f6c-4f31-935e-e7f0ede594db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.405190 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95586ab2-0116-4c10-9917-152e032fa736-client-ca\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.405255 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fbaed05b-f302-4ce6-b1ec-380e75d87a67-tmp\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.405298 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbaed05b-f302-4ce6-b1ec-380e75d87a67-serving-cert\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.405626 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbaed05b-f302-4ce6-b1ec-380e75d87a67-client-ca\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.405688 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95586ab2-0116-4c10-9917-152e032fa736-tmp\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.405776 5107 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95586ab2-0116-4c10-9917-152e032fa736-config\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.405852 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4td88\" (UniqueName: \"kubernetes.io/projected/95586ab2-0116-4c10-9917-152e032fa736-kube-api-access-4td88\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.405922 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbaed05b-f302-4ce6-b1ec-380e75d87a67-config\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.405949 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7kdv\" (UniqueName: \"kubernetes.io/projected/fbaed05b-f302-4ce6-b1ec-380e75d87a67-kube-api-access-t7kdv\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.406012 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95586ab2-0116-4c10-9917-152e032fa736-serving-cert\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.406118 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbaed05b-f302-4ce6-b1ec-380e75d87a67-proxy-ca-bundles\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.406011 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95586ab2-0116-4c10-9917-152e032fa736-client-ca\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.406461 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.406480 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.406494 5107 reconciler_common.go:299] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.406674 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/95586ab2-0116-4c10-9917-152e032fa736-tmp\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.407142 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95586ab2-0116-4c10-9917-152e032fa736-config\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.404662 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-tmp" (OuterVolumeSpecName: "tmp") pod "a9ca2d40-8f6c-4f31-935e-e7f0ede594db" (UID: "a9ca2d40-8f6c-4f31-935e-e7f0ede594db"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.408307 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a9ca2d40-8f6c-4f31-935e-e7f0ede594db" (UID: "a9ca2d40-8f6c-4f31-935e-e7f0ede594db"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.408588 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-kube-api-access-t9xvq" (OuterVolumeSpecName: "kube-api-access-t9xvq") pod "a9ca2d40-8f6c-4f31-935e-e7f0ede594db" (UID: "a9ca2d40-8f6c-4f31-935e-e7f0ede594db"). InnerVolumeSpecName "kube-api-access-t9xvq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.410743 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95586ab2-0116-4c10-9917-152e032fa736-serving-cert\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.424357 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4td88\" (UniqueName: \"kubernetes.io/projected/95586ab2-0116-4c10-9917-152e032fa736-kube-api-access-4td88\") pod \"route-controller-manager-55c4558f4d-tbltr\" (UID: \"95586ab2-0116-4c10-9917-152e032fa736\") " pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.477133 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.507315 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbaed05b-f302-4ce6-b1ec-380e75d87a67-serving-cert\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.507415 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbaed05b-f302-4ce6-b1ec-380e75d87a67-client-ca\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.507464 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbaed05b-f302-4ce6-b1ec-380e75d87a67-config\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.507490 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t7kdv\" (UniqueName: \"kubernetes.io/projected/fbaed05b-f302-4ce6-b1ec-380e75d87a67-kube-api-access-t7kdv\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.507536 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbaed05b-f302-4ce6-b1ec-380e75d87a67-proxy-ca-bundles\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.507582 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fbaed05b-f302-4ce6-b1ec-380e75d87a67-tmp\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.507628 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.507643 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.507657 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t9xvq\" (UniqueName: \"kubernetes.io/projected/a9ca2d40-8f6c-4f31-935e-e7f0ede594db-kube-api-access-t9xvq\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.508502 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/fbaed05b-f302-4ce6-b1ec-380e75d87a67-tmp\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.509254 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbaed05b-f302-4ce6-b1ec-380e75d87a67-config\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.509640 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fbaed05b-f302-4ce6-b1ec-380e75d87a67-proxy-ca-bundles\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.509675 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fbaed05b-f302-4ce6-b1ec-380e75d87a67-client-ca\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.511930 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbaed05b-f302-4ce6-b1ec-380e75d87a67-serving-cert\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.527187 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7kdv\" (UniqueName: \"kubernetes.io/projected/fbaed05b-f302-4ce6-b1ec-380e75d87a67-kube-api-access-t7kdv\") pod \"controller-manager-7fdd5659c7-6nb4t\" (UID: \"fbaed05b-f302-4ce6-b1ec-380e75d87a67\") " pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.642272 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.669396 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr"] Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.986151 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.986180 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc" event={"ID":"d406f2a3-dca4-40e4-9fd3-250122f716ec","Type":"ContainerDied","Data":"bea3908aafa50ef030362e88491a274de47d42e10b4d072d8c79c9093bf0848e"} Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.986704 5107 scope.go:117] "RemoveContainer" containerID="9b79ea487c6d4a421bf02e9c98b0617180e52147416f20a3840b1ce45be8af96" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.987547 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" event={"ID":"a9ca2d40-8f6c-4f31-935e-e7f0ede594db","Type":"ContainerDied","Data":"e59d0100a35742012d32fd71e578213636f1acb5778afb4a031a83436e88a500"} Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.987594 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7686d7c68d-v7sxh" Jan 26 00:15:58 crc kubenswrapper[5107]: I0126 00:15:58.988810 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" event={"ID":"95586ab2-0116-4c10-9917-152e032fa736","Type":"ContainerStarted","Data":"026ae1673f725bc7754a61a033babbd3cdf052897377cfa4f46395e4429465fd"} Jan 26 00:15:59 crc kubenswrapper[5107]: I0126 00:15:59.002675 5107 scope.go:117] "RemoveContainer" containerID="f603219781ba44abe892c03632070ad053bc61de2f9bb24a8528512feeb80e08" Jan 26 00:15:59 crc kubenswrapper[5107]: I0126 00:15:59.020224 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc"] Jan 26 00:15:59 crc kubenswrapper[5107]: I0126 00:15:59.032751 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859dbb6555-hlmpc"] Jan 26 00:15:59 crc kubenswrapper[5107]: I0126 00:15:59.049659 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7686d7c68d-v7sxh"] Jan 26 00:15:59 crc kubenswrapper[5107]: I0126 00:15:59.053447 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7686d7c68d-v7sxh"] Jan 26 00:15:59 crc kubenswrapper[5107]: I0126 00:15:59.081220 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t"] Jan 26 00:15:59 crc kubenswrapper[5107]: W0126 00:15:59.089104 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbaed05b_f302_4ce6_b1ec_380e75d87a67.slice/crio-44186306fcc9eb7f2e7926bc3f09c2f671485ff007041b4dafd1999d72000037 WatchSource:0}: Error finding container 44186306fcc9eb7f2e7926bc3f09c2f671485ff007041b4dafd1999d72000037: Status 404 returned error can't find the container with id 44186306fcc9eb7f2e7926bc3f09c2f671485ff007041b4dafd1999d72000037 Jan 26 00:15:59 crc kubenswrapper[5107]: I0126 00:15:59.997368 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" 
event={"ID":"fbaed05b-f302-4ce6-b1ec-380e75d87a67","Type":"ContainerStarted","Data":"5c47b19df5bbf67402c56fd58f615f3d8eae4c8af60e8a7448b2537a61e6d3d1"} Jan 26 00:15:59 crc kubenswrapper[5107]: I0126 00:15:59.997847 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" event={"ID":"fbaed05b-f302-4ce6-b1ec-380e75d87a67","Type":"ContainerStarted","Data":"44186306fcc9eb7f2e7926bc3f09c2f671485ff007041b4dafd1999d72000037"} Jan 26 00:15:59 crc kubenswrapper[5107]: I0126 00:15:59.997914 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:16:00 crc kubenswrapper[5107]: I0126 00:16:00.003820 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" event={"ID":"95586ab2-0116-4c10-9917-152e032fa736","Type":"ContainerStarted","Data":"e8e3624a210612400018a7050148be55996f7a131d02d5d0a782f9c47fe787c0"} Jan 26 00:16:00 crc kubenswrapper[5107]: I0126 00:16:00.004139 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:16:00 crc kubenswrapper[5107]: I0126 00:16:00.009910 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" Jan 26 00:16:00 crc kubenswrapper[5107]: I0126 00:16:00.015995 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" podStartSLOduration=3.015977714 podStartE2EDuration="3.015977714s" podCreationTimestamp="2026-01-26 00:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:16:00.015576162 +0000 UTC m=+404.933170528" watchObservedRunningTime="2026-01-26 00:16:00.015977714 +0000 UTC m=+404.933572080" Jan 26 00:16:00 crc kubenswrapper[5107]: I0126 00:16:00.125499 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9ca2d40-8f6c-4f31-935e-e7f0ede594db" path="/var/lib/kubelet/pods/a9ca2d40-8f6c-4f31-935e-e7f0ede594db/volumes" Jan 26 00:16:00 crc kubenswrapper[5107]: I0126 00:16:00.126935 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d406f2a3-dca4-40e4-9fd3-250122f716ec" path="/var/lib/kubelet/pods/d406f2a3-dca4-40e4-9fd3-250122f716ec/volumes" Jan 26 00:16:00 crc kubenswrapper[5107]: I0126 00:16:00.770253 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fdd5659c7-6nb4t" Jan 26 00:16:00 crc kubenswrapper[5107]: I0126 00:16:00.786320 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55c4558f4d-tbltr" podStartSLOduration=3.786277937 podStartE2EDuration="3.786277937s" podCreationTimestamp="2026-01-26 00:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:16:00.03847555 +0000 UTC m=+404.956069906" watchObservedRunningTime="2026-01-26 00:16:00.786277937 +0000 UTC m=+405.703872283" Jan 26 00:16:02 crc kubenswrapper[5107]: I0126 00:16:02.603610 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 26 00:16:09 crc kubenswrapper[5107]: I0126 00:16:09.897768 5107 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.521786 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbddn"] Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.523079 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gbddn" podUID="c0c7bec4-aeda-4946-9599-726d61c41d93" containerName="registry-server" containerID="cri-o://4766a318aa053ca1b5b81553962d132be2196a9749a08ae6d6c74ccc97fc5675" gracePeriod=30 Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.538355 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bh5dd"] Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.539093 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bh5dd" podUID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" containerName="registry-server" containerID="cri-o://c17e5e951e3f10c30c7488ada651ed12e6f0a0893b9447b8b18ae6da7137ed70" gracePeriod=30 Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.554105 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-59jn5"] Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.554436 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" containerID="cri-o://ca9dec53aa7c93c365f90b547feb255a966e3d662679cdaef2eab8637f7f82e9" gracePeriod=30 Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.569164 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j26gs"] Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.569734 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j26gs" podUID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" containerName="registry-server" containerID="cri-o://b112e2d375a4a874f3e1836260e9227648dfec06b82235a8f7ca11c00e5377b1" gracePeriod=30 Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.587834 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2chhv"] Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.588252 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2chhv" podUID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerName="registry-server" containerID="cri-o://4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033" gracePeriod=30 Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.592828 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z8mjk"] Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.811050 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z8mjk"] Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.811233 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.936385 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdb4m\" (UniqueName: \"kubernetes.io/projected/f9693a56-8c67-49d4-86ef-00efbe7882a5-kube-api-access-cdb4m\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.936442 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9693a56-8c67-49d4-86ef-00efbe7882a5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.936583 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f9693a56-8c67-49d4-86ef-00efbe7882a5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:14 crc kubenswrapper[5107]: I0126 00:16:14.936690 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f9693a56-8c67-49d4-86ef-00efbe7882a5-tmp\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.038438 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f9693a56-8c67-49d4-86ef-00efbe7882a5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.038527 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f9693a56-8c67-49d4-86ef-00efbe7882a5-tmp\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.038587 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cdb4m\" (UniqueName: \"kubernetes.io/projected/f9693a56-8c67-49d4-86ef-00efbe7882a5-kube-api-access-cdb4m\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.038620 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9693a56-8c67-49d4-86ef-00efbe7882a5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.039622 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f9693a56-8c67-49d4-86ef-00efbe7882a5-tmp\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.040504 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9693a56-8c67-49d4-86ef-00efbe7882a5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.048056 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f9693a56-8c67-49d4-86ef-00efbe7882a5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.058461 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdb4m\" (UniqueName: \"kubernetes.io/projected/f9693a56-8c67-49d4-86ef-00efbe7882a5-kube-api-access-cdb4m\") pod \"marketplace-operator-547dbd544d-z8mjk\" (UID: \"f9693a56-8c67-49d4-86ef-00efbe7882a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.098396 5107 generic.go:358] "Generic (PLEG): container finished" podID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" containerID="b112e2d375a4a874f3e1836260e9227648dfec06b82235a8f7ca11c00e5377b1" exitCode=0 Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.098472 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j26gs" event={"ID":"b2f8e393-1ed3-4475-bd0b-e0af8867a07a","Type":"ContainerDied","Data":"b112e2d375a4a874f3e1836260e9227648dfec06b82235a8f7ca11c00e5377b1"} Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.100521 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-59jn5_d93df320-4284-49f0-b63d-ba8a86943f2e/marketplace-operator/1.log" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.100560 5107 generic.go:358] "Generic (PLEG): container finished" podID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerID="ca9dec53aa7c93c365f90b547feb255a966e3d662679cdaef2eab8637f7f82e9" exitCode=0 Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.100654 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" event={"ID":"d93df320-4284-49f0-b63d-ba8a86943f2e","Type":"ContainerDied","Data":"ca9dec53aa7c93c365f90b547feb255a966e3d662679cdaef2eab8637f7f82e9"} Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.100758 5107 scope.go:117] "RemoveContainer" containerID="e38dfb80bc684842805a934db57ac36d88492a6032b81a4bf3c4665a02c5918a" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.102773 5107 generic.go:358] "Generic (PLEG): container finished" podID="c0c7bec4-aeda-4946-9599-726d61c41d93" 
containerID="4766a318aa053ca1b5b81553962d132be2196a9749a08ae6d6c74ccc97fc5675" exitCode=0 Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.102934 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbddn" event={"ID":"c0c7bec4-aeda-4946-9599-726d61c41d93","Type":"ContainerDied","Data":"4766a318aa053ca1b5b81553962d132be2196a9749a08ae6d6c74ccc97fc5675"} Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.105308 5107 generic.go:358] "Generic (PLEG): container finished" podID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" containerID="c17e5e951e3f10c30c7488ada651ed12e6f0a0893b9447b8b18ae6da7137ed70" exitCode=0 Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.105388 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh5dd" event={"ID":"d71d7360-3eef-4260-b288-7fc9f8d6fecc","Type":"ContainerDied","Data":"c17e5e951e3f10c30c7488ada651ed12e6f0a0893b9447b8b18ae6da7137ed70"} Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.107667 5107 generic.go:358] "Generic (PLEG): container finished" podID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerID="4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033" exitCode=0 Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.107711 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2chhv" event={"ID":"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741","Type":"ContainerDied","Data":"4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033"} Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.330199 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.473642 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.544724 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2kg6\" (UniqueName: \"kubernetes.io/projected/c0c7bec4-aeda-4946-9599-726d61c41d93-kube-api-access-c2kg6\") pod \"c0c7bec4-aeda-4946-9599-726d61c41d93\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.544859 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-utilities\") pod \"c0c7bec4-aeda-4946-9599-726d61c41d93\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.544906 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-catalog-content\") pod \"c0c7bec4-aeda-4946-9599-726d61c41d93\" (UID: \"c0c7bec4-aeda-4946-9599-726d61c41d93\") " Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.547819 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-utilities" (OuterVolumeSpecName: "utilities") pod "c0c7bec4-aeda-4946-9599-726d61c41d93" (UID: "c0c7bec4-aeda-4946-9599-726d61c41d93"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.555700 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c7bec4-aeda-4946-9599-726d61c41d93-kube-api-access-c2kg6" (OuterVolumeSpecName: "kube-api-access-c2kg6") pod "c0c7bec4-aeda-4946-9599-726d61c41d93" (UID: "c0c7bec4-aeda-4946-9599-726d61c41d93"). InnerVolumeSpecName "kube-api-access-c2kg6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.578615 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0c7bec4-aeda-4946-9599-726d61c41d93" (UID: "c0c7bec4-aeda-4946-9599-726d61c41d93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.646638 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c2kg6\" (UniqueName: \"kubernetes.io/projected/c0c7bec4-aeda-4946-9599-726d61c41d93-kube-api-access-c2kg6\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.647018 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.647107 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c7bec4-aeda-4946-9599-726d61c41d93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.714938 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.925336 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqbh2\" (UniqueName: \"kubernetes.io/projected/d71d7360-3eef-4260-b288-7fc9f8d6fecc-kube-api-access-xqbh2\") pod \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.925416 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-utilities\") pod \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.925518 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-catalog-content\") pod \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\" (UID: \"d71d7360-3eef-4260-b288-7fc9f8d6fecc\") " Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.928069 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-utilities" (OuterVolumeSpecName: "utilities") pod "d71d7360-3eef-4260-b288-7fc9f8d6fecc" (UID: "d71d7360-3eef-4260-b288-7fc9f8d6fecc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.940196 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d71d7360-3eef-4260-b288-7fc9f8d6fecc-kube-api-access-xqbh2" (OuterVolumeSpecName: "kube-api-access-xqbh2") pod "d71d7360-3eef-4260-b288-7fc9f8d6fecc" (UID: "d71d7360-3eef-4260-b288-7fc9f8d6fecc"). InnerVolumeSpecName "kube-api-access-xqbh2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:15 crc kubenswrapper[5107]: E0126 00:16:15.990180 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033 is running failed: container process not found" containerID="4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:16:15 crc kubenswrapper[5107]: E0126 00:16:15.994379 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033 is running failed: container process not found" containerID="4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:16:15 crc kubenswrapper[5107]: E0126 00:16:15.995020 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033 is running failed: container process not found" containerID="4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:16:15 crc kubenswrapper[5107]: E0126 00:16:15.995170 5107 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-2chhv" podUID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerName="registry-server" probeResult="unknown" Jan 26 00:16:15 crc kubenswrapper[5107]: I0126 00:16:15.998901 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z8mjk"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.020554 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d71d7360-3eef-4260-b288-7fc9f8d6fecc" (UID: "d71d7360-3eef-4260-b288-7fc9f8d6fecc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.021775 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.026374 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d93df320-4284-49f0-b63d-ba8a86943f2e-tmp\") pod \"d93df320-4284-49f0-b63d-ba8a86943f2e\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.026538 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-trusted-ca\") pod \"d93df320-4284-49f0-b63d-ba8a86943f2e\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.026609 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qzl7\" (UniqueName: \"kubernetes.io/projected/d93df320-4284-49f0-b63d-ba8a86943f2e-kube-api-access-7qzl7\") pod \"d93df320-4284-49f0-b63d-ba8a86943f2e\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.026668 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-operator-metrics\") pod \"d93df320-4284-49f0-b63d-ba8a86943f2e\" (UID: \"d93df320-4284-49f0-b63d-ba8a86943f2e\") " Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.027051 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d93df320-4284-49f0-b63d-ba8a86943f2e-tmp" (OuterVolumeSpecName: "tmp") pod "d93df320-4284-49f0-b63d-ba8a86943f2e" (UID: "d93df320-4284-49f0-b63d-ba8a86943f2e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.027605 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d93df320-4284-49f0-b63d-ba8a86943f2e" (UID: "d93df320-4284-49f0-b63d-ba8a86943f2e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.028179 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.028231 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d71d7360-3eef-4260-b288-7fc9f8d6fecc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.028247 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xqbh2\" (UniqueName: \"kubernetes.io/projected/d71d7360-3eef-4260-b288-7fc9f8d6fecc-kube-api-access-xqbh2\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.029125 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.033970 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93df320-4284-49f0-b63d-ba8a86943f2e-kube-api-access-7qzl7" (OuterVolumeSpecName: "kube-api-access-7qzl7") pod "d93df320-4284-49f0-b63d-ba8a86943f2e" (UID: "d93df320-4284-49f0-b63d-ba8a86943f2e"). InnerVolumeSpecName "kube-api-access-7qzl7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.034057 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d93df320-4284-49f0-b63d-ba8a86943f2e" (UID: "d93df320-4284-49f0-b63d-ba8a86943f2e"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.038751 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.149298 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-utilities\") pod \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.149438 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6c5t\" (UniqueName: \"kubernetes.io/projected/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-kube-api-access-k6c5t\") pod \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.149519 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-catalog-content\") pod \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.149762 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-utilities\") pod \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.149840 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrxjf\" (UniqueName: \"kubernetes.io/projected/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-kube-api-access-zrxjf\") pod \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\" (UID: \"b2f8e393-1ed3-4475-bd0b-e0af8867a07a\") " Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.149923 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-catalog-content\") pod \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\" (UID: \"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741\") " Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.150519 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/d93df320-4284-49f0-b63d-ba8a86943f2e-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.150549 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.150572 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7qzl7\" (UniqueName: \"kubernetes.io/projected/d93df320-4284-49f0-b63d-ba8a86943f2e-kube-api-access-7qzl7\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.150593 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d93df320-4284-49f0-b63d-ba8a86943f2e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.172446 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-utilities" (OuterVolumeSpecName: "utilities") pod "b2f8e393-1ed3-4475-bd0b-e0af8867a07a" (UID: "b2f8e393-1ed3-4475-bd0b-e0af8867a07a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.174287 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-utilities" (OuterVolumeSpecName: "utilities") pod "1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" (UID: "1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.191897 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-kube-api-access-zrxjf" (OuterVolumeSpecName: "kube-api-access-zrxjf") pod "b2f8e393-1ed3-4475-bd0b-e0af8867a07a" (UID: "b2f8e393-1ed3-4475-bd0b-e0af8867a07a"). InnerVolumeSpecName "kube-api-access-zrxjf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.195183 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-kube-api-access-k6c5t" (OuterVolumeSpecName: "kube-api-access-k6c5t") pod "1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" (UID: "1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741"). InnerVolumeSpecName "kube-api-access-k6c5t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.231927 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2f8e393-1ed3-4475-bd0b-e0af8867a07a" (UID: "b2f8e393-1ed3-4475-bd0b-e0af8867a07a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.271469 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gbddn" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.275061 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.275092 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zrxjf\" (UniqueName: \"kubernetes.io/projected/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-kube-api-access-zrxjf\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.275106 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.275120 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k6c5t\" (UniqueName: \"kubernetes.io/projected/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-kube-api-access-k6c5t\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.275135 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2f8e393-1ed3-4475-bd0b-e0af8867a07a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.283425 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bh5dd" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.315330 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2chhv" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.325159 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j26gs" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.327555 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbddn" event={"ID":"c0c7bec4-aeda-4946-9599-726d61c41d93","Type":"ContainerDied","Data":"5d28069fcf50443e0e1eef8f7bcfd4529979305729d9ab516aa31881b651aa56"} Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.327606 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" event={"ID":"f9693a56-8c67-49d4-86ef-00efbe7882a5","Type":"ContainerStarted","Data":"2de1d0a2890fe006cd11793a5980600ca4f19b864c6c64eb11d513c0a0449d0a"} Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.327633 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bh5dd" event={"ID":"d71d7360-3eef-4260-b288-7fc9f8d6fecc","Type":"ContainerDied","Data":"32fb159b08fc542583ffac618284c6e4f511d17cb0b9fde87e51d9a9e18968bb"} Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.327649 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2chhv" event={"ID":"1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741","Type":"ContainerDied","Data":"bc2d48e2bcd1d84bcbb6a63e1072958f4152c09cd788ca8223267d8dd8006290"} Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.327666 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j26gs" event={"ID":"b2f8e393-1ed3-4475-bd0b-e0af8867a07a","Type":"ContainerDied","Data":"3e5b1d06f0d9e62ef989468e5e4267088a933dc68084f463d493a086d3bcce78"} Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.327689 5107 scope.go:117] "RemoveContainer" containerID="4766a318aa053ca1b5b81553962d132be2196a9749a08ae6d6c74ccc97fc5675" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.332684 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bh5dd"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.334776 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" event={"ID":"d93df320-4284-49f0-b63d-ba8a86943f2e","Type":"ContainerDied","Data":"405cbb641a4f8745b92212ab993341c6d200e4b1a4c8fb6f258f763afb6975f8"} Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.335666 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-59jn5" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.336972 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bh5dd"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.354464 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbddn"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.359522 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gbddn"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.372579 5107 scope.go:117] "RemoveContainer" containerID="17b412020faa68cbabd172ad15478d5ec84b05e4fc24012a2171a8483c4c1037" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.388608 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-59jn5"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.394315 5107 scope.go:117] "RemoveContainer" containerID="b1054e7d7f63c5344d78a93a8aac5d7206bfc6e3b4ecff76fe6c647e31d5adcb" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.397167 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-59jn5"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.410853 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j26gs"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.417344 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j26gs"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.419230 5107 scope.go:117] "RemoveContainer" containerID="c17e5e951e3f10c30c7488ada651ed12e6f0a0893b9447b8b18ae6da7137ed70" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.437220 5107 scope.go:117] "RemoveContainer" containerID="ef9e4e85d61e8fd9e5ef228771ddf68d0714d00cae59cbb1e507c84cbb21dd9f" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.456946 5107 scope.go:117] "RemoveContainer" containerID="f085612c488a1a673ed80660d81861a573e540f227bbdfdcccbd78f6623f2558" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.475655 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" (UID: "1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.478694 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.484544 5107 scope.go:117] "RemoveContainer" containerID="4142fce0a9fb522dddaa4d0b5f63e9df61e441757754d34465acd838f437a033" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.500731 5107 scope.go:117] "RemoveContainer" containerID="4e075291ef63ff967d8d65c5d76fc09ba957bc13112c1a5afd02bc8cb8ed9544" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.518490 5107 scope.go:117] "RemoveContainer" containerID="e343434912061ab767cd5f0070acdd3eb8f610c4de32bd4adf42cebed94202be" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.535538 5107 scope.go:117] "RemoveContainer" containerID="b112e2d375a4a874f3e1836260e9227648dfec06b82235a8f7ca11c00e5377b1" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.555299 5107 scope.go:117] "RemoveContainer" containerID="0476b02eb4481b60b1f7622bb55be837270ec85faa0a7248a18d7b563efb96dd" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.578094 5107 scope.go:117] "RemoveContainer" containerID="382a98cd9a0790fc6419b0a60953c99d5126f685c796d293a38b6c8871715e39" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.598853 5107 scope.go:117] "RemoveContainer" containerID="ca9dec53aa7c93c365f90b547feb255a966e3d662679cdaef2eab8637f7f82e9" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.683314 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2chhv"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.692828 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2chhv"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.756037 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-56zdj"] Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.756855 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.756982 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757048 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757105 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757170 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c0c7bec4-aeda-4946-9599-726d61c41d93" containerName="registry-server" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757252 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c7bec4-aeda-4946-9599-726d61c41d93" containerName="registry-server" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757316 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" 
containerName="extract-content" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757371 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" containerName="extract-content" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757429 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" containerName="registry-server" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757483 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" containerName="registry-server" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757538 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c0c7bec4-aeda-4946-9599-726d61c41d93" containerName="extract-utilities" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757597 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c7bec4-aeda-4946-9599-726d61c41d93" containerName="extract-utilities" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757670 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerName="extract-utilities" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757723 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerName="extract-utilities" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757809 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerName="registry-server" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757909 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerName="registry-server" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.757990 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerName="extract-content" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.758048 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerName="extract-content" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.758282 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" containerName="extract-utilities" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.758362 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" containerName="extract-utilities" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.758429 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c0c7bec4-aeda-4946-9599-726d61c41d93" containerName="extract-content" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.758491 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c7bec4-aeda-4946-9599-726d61c41d93" containerName="extract-content" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.758551 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" containerName="registry-server" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.758606 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" containerName="registry-server" Jan 26 00:16:16 crc 
kubenswrapper[5107]: I0126 00:16:16.758668 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" containerName="extract-content" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.758724 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" containerName="extract-content" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.758783 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" containerName="extract-utilities" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.758841 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" containerName="extract-utilities" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.759020 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" containerName="registry-server" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.759098 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.759161 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" containerName="registry-server" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.759224 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="c0c7bec4-aeda-4946-9599-726d61c41d93" containerName="registry-server" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.759282 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" containerName="registry-server" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.759340 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.759398 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.759549 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" Jan 26 00:16:16 crc kubenswrapper[5107]: I0126 00:16:16.759615 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" containerName="marketplace-operator" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.355137 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56zdj"] Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.355183 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xpnt2"] Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.355416 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.357649 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.493365 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpnt2"] Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.493429 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" event={"ID":"f9693a56-8c67-49d4-86ef-00efbe7882a5","Type":"ContainerStarted","Data":"0123d1da4336828e50445db2bbd3fe0163e66e3206ae69cd7ece2d78d570a9d7"} Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.493586 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.494744 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.496751 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.505845 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/437f5b38-eba1-4df5-88b1-40368d973099-catalog-content\") pod \"community-operators-56zdj\" (UID: \"437f5b38-eba1-4df5-88b1-40368d973099\") " pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.505907 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.505948 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcbw8\" (UniqueName: \"kubernetes.io/projected/437f5b38-eba1-4df5-88b1-40368d973099-kube-api-access-jcbw8\") pod \"community-operators-56zdj\" (UID: \"437f5b38-eba1-4df5-88b1-40368d973099\") " pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.506220 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/437f5b38-eba1-4df5-88b1-40368d973099-utilities\") pod \"community-operators-56zdj\" (UID: \"437f5b38-eba1-4df5-88b1-40368d973099\") " pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.548429 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-z8mjk" podStartSLOduration=3.548412066 podStartE2EDuration="3.548412066s" podCreationTimestamp="2026-01-26 00:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:16:17.544411767 +0000 UTC m=+422.462006113" watchObservedRunningTime="2026-01-26 00:16:17.548412066 +0000 UTC m=+422.466006412" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.607385 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jcbw8\" (UniqueName: \"kubernetes.io/projected/437f5b38-eba1-4df5-88b1-40368d973099-kube-api-access-jcbw8\") pod \"community-operators-56zdj\" (UID: \"437f5b38-eba1-4df5-88b1-40368d973099\") " pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.607448 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w4lm\" (UniqueName: \"kubernetes.io/projected/777fdb5a-d598-4e89-804c-c0a26fb1d077-kube-api-access-4w4lm\") pod \"certified-operators-xpnt2\" (UID: \"777fdb5a-d598-4e89-804c-c0a26fb1d077\") " pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.607509 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777fdb5a-d598-4e89-804c-c0a26fb1d077-utilities\") pod \"certified-operators-xpnt2\" (UID: \"777fdb5a-d598-4e89-804c-c0a26fb1d077\") " pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.607536 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777fdb5a-d598-4e89-804c-c0a26fb1d077-catalog-content\") pod \"certified-operators-xpnt2\" (UID: \"777fdb5a-d598-4e89-804c-c0a26fb1d077\") " pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.607582 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/437f5b38-eba1-4df5-88b1-40368d973099-utilities\") pod \"community-operators-56zdj\" (UID: \"437f5b38-eba1-4df5-88b1-40368d973099\") " pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.607621 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/437f5b38-eba1-4df5-88b1-40368d973099-catalog-content\") pod \"community-operators-56zdj\" (UID: \"437f5b38-eba1-4df5-88b1-40368d973099\") " pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.608074 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/437f5b38-eba1-4df5-88b1-40368d973099-catalog-content\") pod \"community-operators-56zdj\" (UID: \"437f5b38-eba1-4df5-88b1-40368d973099\") " pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.609308 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/437f5b38-eba1-4df5-88b1-40368d973099-utilities\") pod \"community-operators-56zdj\" (UID: \"437f5b38-eba1-4df5-88b1-40368d973099\") " pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.649377 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcbw8\" (UniqueName: \"kubernetes.io/projected/437f5b38-eba1-4df5-88b1-40368d973099-kube-api-access-jcbw8\") pod \"community-operators-56zdj\" (UID: \"437f5b38-eba1-4df5-88b1-40368d973099\") " pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:17 crc kubenswrapper[5107]: 
I0126 00:16:17.715161 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777fdb5a-d598-4e89-804c-c0a26fb1d077-utilities\") pod \"certified-operators-xpnt2\" (UID: \"777fdb5a-d598-4e89-804c-c0a26fb1d077\") " pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.715836 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777fdb5a-d598-4e89-804c-c0a26fb1d077-catalog-content\") pod \"certified-operators-xpnt2\" (UID: \"777fdb5a-d598-4e89-804c-c0a26fb1d077\") " pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.716159 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4w4lm\" (UniqueName: \"kubernetes.io/projected/777fdb5a-d598-4e89-804c-c0a26fb1d077-kube-api-access-4w4lm\") pod \"certified-operators-xpnt2\" (UID: \"777fdb5a-d598-4e89-804c-c0a26fb1d077\") " pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.716779 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777fdb5a-d598-4e89-804c-c0a26fb1d077-utilities\") pod \"certified-operators-xpnt2\" (UID: \"777fdb5a-d598-4e89-804c-c0a26fb1d077\") " pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.717005 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777fdb5a-d598-4e89-804c-c0a26fb1d077-catalog-content\") pod \"certified-operators-xpnt2\" (UID: \"777fdb5a-d598-4e89-804c-c0a26fb1d077\") " pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.745834 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w4lm\" (UniqueName: \"kubernetes.io/projected/777fdb5a-d598-4e89-804c-c0a26fb1d077-kube-api-access-4w4lm\") pod \"certified-operators-xpnt2\" (UID: \"777fdb5a-d598-4e89-804c-c0a26fb1d077\") " pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.803021 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:17 crc kubenswrapper[5107]: I0126 00:16:17.814685 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:17 crc kubenswrapper[5107]: E0126 00:16:17.844730 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0c7bec4_aeda_4946_9599_726d61c41d93.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd71d7360_3eef_4260_b288_7fc9f8d6fecc.slice\": RecentStats: unable to find data in memory cache]" Jan 26 00:16:18 crc kubenswrapper[5107]: I0126 00:16:18.123454 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741" path="/var/lib/kubelet/pods/1efa3a82-61b0-4a38-8b2e-8c1f8d3e3741/volumes" Jan 26 00:16:18 crc kubenswrapper[5107]: I0126 00:16:18.124877 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2f8e393-1ed3-4475-bd0b-e0af8867a07a" path="/var/lib/kubelet/pods/b2f8e393-1ed3-4475-bd0b-e0af8867a07a/volumes" Jan 26 00:16:18 crc kubenswrapper[5107]: I0126 00:16:18.125770 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c7bec4-aeda-4946-9599-726d61c41d93" path="/var/lib/kubelet/pods/c0c7bec4-aeda-4946-9599-726d61c41d93/volumes" Jan 26 00:16:18 crc kubenswrapper[5107]: I0126 00:16:18.127619 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d71d7360-3eef-4260-b288-7fc9f8d6fecc" path="/var/lib/kubelet/pods/d71d7360-3eef-4260-b288-7fc9f8d6fecc/volumes" Jan 26 00:16:18 crc kubenswrapper[5107]: I0126 00:16:18.128321 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d93df320-4284-49f0-b63d-ba8a86943f2e" path="/var/lib/kubelet/pods/d93df320-4284-49f0-b63d-ba8a86943f2e/volumes" Jan 26 00:16:18 crc kubenswrapper[5107]: I0126 00:16:18.415533 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56zdj"] Jan 26 00:16:18 crc kubenswrapper[5107]: W0126 00:16:18.423687 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod437f5b38_eba1_4df5_88b1_40368d973099.slice/crio-1726086ef4859b121513c3748b766ecadec6213f0a99cfeb988e807d6e8eaa1d WatchSource:0}: Error finding container 1726086ef4859b121513c3748b766ecadec6213f0a99cfeb988e807d6e8eaa1d: Status 404 returned error can't find the container with id 1726086ef4859b121513c3748b766ecadec6213f0a99cfeb988e807d6e8eaa1d Jan 26 00:16:18 crc kubenswrapper[5107]: I0126 00:16:18.494784 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpnt2"] Jan 26 00:16:18 crc kubenswrapper[5107]: W0126 00:16:18.499604 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod777fdb5a_d598_4e89_804c_c0a26fb1d077.slice/crio-faa5645a09e7cf3220e22b76f0a7d30b7a8b677f2c1002544932622f6e1b3deb WatchSource:0}: Error finding container faa5645a09e7cf3220e22b76f0a7d30b7a8b677f2c1002544932622f6e1b3deb: Status 404 returned error can't find the container with id faa5645a09e7cf3220e22b76f0a7d30b7a8b677f2c1002544932622f6e1b3deb Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.364132 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r6gc5"] Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.370436 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.374354 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.382937 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6gc5"] Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.383618 5107 generic.go:358] "Generic (PLEG): container finished" podID="777fdb5a-d598-4e89-804c-c0a26fb1d077" containerID="943e067cea538acbaaf18260281e5468c74e065736eb2a4eba28978ae290ec19" exitCode=0 Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.384982 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpnt2" event={"ID":"777fdb5a-d598-4e89-804c-c0a26fb1d077","Type":"ContainerDied","Data":"943e067cea538acbaaf18260281e5468c74e065736eb2a4eba28978ae290ec19"} Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.385017 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpnt2" event={"ID":"777fdb5a-d598-4e89-804c-c0a26fb1d077","Type":"ContainerStarted","Data":"faa5645a09e7cf3220e22b76f0a7d30b7a8b677f2c1002544932622f6e1b3deb"} Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.386752 5107 generic.go:358] "Generic (PLEG): container finished" podID="437f5b38-eba1-4df5-88b1-40368d973099" containerID="b56ad0aed252b1e65e652ed8837886b5a88cf4ce127509aee07c42ad219ad5ad" exitCode=0 Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.386895 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56zdj" event={"ID":"437f5b38-eba1-4df5-88b1-40368d973099","Type":"ContainerDied","Data":"b56ad0aed252b1e65e652ed8837886b5a88cf4ce127509aee07c42ad219ad5ad"} Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.386918 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56zdj" event={"ID":"437f5b38-eba1-4df5-88b1-40368d973099","Type":"ContainerStarted","Data":"1726086ef4859b121513c3748b766ecadec6213f0a99cfeb988e807d6e8eaa1d"} Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.446724 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-catalog-content\") pod \"redhat-marketplace-r6gc5\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.447345 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx26l\" (UniqueName: \"kubernetes.io/projected/cf80d393-0243-47e1-89a1-ce7110280256-kube-api-access-jx26l\") pod \"redhat-marketplace-r6gc5\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.447636 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-utilities\") pod \"redhat-marketplace-r6gc5\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 
00:16:19.550029 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-utilities\") pod \"redhat-marketplace-r6gc5\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.550384 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-catalog-content\") pod \"redhat-marketplace-r6gc5\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.550866 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-utilities\") pod \"redhat-marketplace-r6gc5\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.550944 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jx26l\" (UniqueName: \"kubernetes.io/projected/cf80d393-0243-47e1-89a1-ce7110280256-kube-api-access-jx26l\") pod \"redhat-marketplace-r6gc5\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.551435 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-catalog-content\") pod \"redhat-marketplace-r6gc5\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.574360 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g95mx"] Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.575203 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx26l\" (UniqueName: \"kubernetes.io/projected/cf80d393-0243-47e1-89a1-ce7110280256-kube-api-access-jx26l\") pod \"redhat-marketplace-r6gc5\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.585340 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g95mx"] Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.585509 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.588236 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.653113 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb46dfe5-5251-43ad-a7a1-7f52c860a08b-utilities\") pod \"redhat-operators-g95mx\" (UID: \"cb46dfe5-5251-43ad-a7a1-7f52c860a08b\") " pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.653177 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlzv8\" (UniqueName: \"kubernetes.io/projected/cb46dfe5-5251-43ad-a7a1-7f52c860a08b-kube-api-access-rlzv8\") pod \"redhat-operators-g95mx\" (UID: \"cb46dfe5-5251-43ad-a7a1-7f52c860a08b\") " pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.653225 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb46dfe5-5251-43ad-a7a1-7f52c860a08b-catalog-content\") pod \"redhat-operators-g95mx\" (UID: \"cb46dfe5-5251-43ad-a7a1-7f52c860a08b\") " pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.706395 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.754814 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb46dfe5-5251-43ad-a7a1-7f52c860a08b-utilities\") pod \"redhat-operators-g95mx\" (UID: \"cb46dfe5-5251-43ad-a7a1-7f52c860a08b\") " pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.754908 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rlzv8\" (UniqueName: \"kubernetes.io/projected/cb46dfe5-5251-43ad-a7a1-7f52c860a08b-kube-api-access-rlzv8\") pod \"redhat-operators-g95mx\" (UID: \"cb46dfe5-5251-43ad-a7a1-7f52c860a08b\") " pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.754949 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb46dfe5-5251-43ad-a7a1-7f52c860a08b-catalog-content\") pod \"redhat-operators-g95mx\" (UID: \"cb46dfe5-5251-43ad-a7a1-7f52c860a08b\") " pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.756060 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb46dfe5-5251-43ad-a7a1-7f52c860a08b-catalog-content\") pod \"redhat-operators-g95mx\" (UID: \"cb46dfe5-5251-43ad-a7a1-7f52c860a08b\") " pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.756183 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb46dfe5-5251-43ad-a7a1-7f52c860a08b-utilities\") pod \"redhat-operators-g95mx\" (UID: 
\"cb46dfe5-5251-43ad-a7a1-7f52c860a08b\") " pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.781260 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlzv8\" (UniqueName: \"kubernetes.io/projected/cb46dfe5-5251-43ad-a7a1-7f52c860a08b-kube-api-access-rlzv8\") pod \"redhat-operators-g95mx\" (UID: \"cb46dfe5-5251-43ad-a7a1-7f52c860a08b\") " pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:19 crc kubenswrapper[5107]: I0126 00:16:19.917204 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.047605 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vv5pc"] Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.063325 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vv5pc"] Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.063434 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.160267 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-trusted-ca\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.160602 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-registry-tls\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.160627 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-registry-certificates\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.160696 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.160740 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9wnk\" (UniqueName: \"kubernetes.io/projected/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-kube-api-access-f9wnk\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.160767 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.160788 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.160813 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.189980 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6gc5"] Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.199728 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.268025 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-trusted-ca\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.268090 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-registry-tls\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.268118 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-registry-certificates\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.268213 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9wnk\" (UniqueName: \"kubernetes.io/projected/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-kube-api-access-f9wnk\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.268245 
5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.268266 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.268287 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.277151 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.282290 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-trusted-ca\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.289004 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.290335 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-registry-certificates\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.291135 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-registry-tls\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.296949 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: 
\"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.296983 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9wnk\" (UniqueName: \"kubernetes.io/projected/f4b3e4d8-24e2-4dec-8b92-0b089d1837f1-kube-api-access-f9wnk\") pod \"image-registry-5d9d95bf5b-vv5pc\" (UID: \"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.394782 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpnt2" event={"ID":"777fdb5a-d598-4e89-804c-c0a26fb1d077","Type":"ContainerStarted","Data":"f26930708dc481784d24553601c24eaaa908ec0fa677360c6f6629d52c606d6d"} Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.405309 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56zdj" event={"ID":"437f5b38-eba1-4df5-88b1-40368d973099","Type":"ContainerStarted","Data":"5a37a4cdaabb55abba3b0e6c417bfd5bb0ba5c8f3fbbe457fc1da03fccba6055"} Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.407783 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6gc5" event={"ID":"cf80d393-0243-47e1-89a1-ce7110280256","Type":"ContainerStarted","Data":"87e3116476b762fc3a70cffa5eba64ce44e37bdfc1bc18eebc234c9d7799b215"} Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.472488 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.529701 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g95mx"] Jan 26 00:16:20 crc kubenswrapper[5107]: W0126 00:16:20.645096 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb46dfe5_5251_43ad_a7a1_7f52c860a08b.slice/crio-c2b7994ca3167e91059bbc14f58b69efc258e65c88768452f2dc81e1486a2479 WatchSource:0}: Error finding container c2b7994ca3167e91059bbc14f58b69efc258e65c88768452f2dc81e1486a2479: Status 404 returned error can't find the container with id c2b7994ca3167e91059bbc14f58b69efc258e65c88768452f2dc81e1486a2479 Jan 26 00:16:20 crc kubenswrapper[5107]: I0126 00:16:20.919393 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vv5pc"] Jan 26 00:16:20 crc kubenswrapper[5107]: W0126 00:16:20.925752 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b3e4d8_24e2_4dec_8b92_0b089d1837f1.slice/crio-495cc0aa2334ad7fb450881da970540b8c3644ca4aec7e1b4fb298a94b6f6723 WatchSource:0}: Error finding container 495cc0aa2334ad7fb450881da970540b8c3644ca4aec7e1b4fb298a94b6f6723: Status 404 returned error can't find the container with id 495cc0aa2334ad7fb450881da970540b8c3644ca4aec7e1b4fb298a94b6f6723 Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.418522 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" event={"ID":"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1","Type":"ContainerStarted","Data":"e7afc0ec0e1cbcfb3fedc16fd042d5a5634aa1b8324a15b7deffec06e4f1ab8a"} Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.418595 5107 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" event={"ID":"f4b3e4d8-24e2-4dec-8b92-0b089d1837f1","Type":"ContainerStarted","Data":"495cc0aa2334ad7fb450881da970540b8c3644ca4aec7e1b4fb298a94b6f6723"} Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.418951 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.423199 5107 generic.go:358] "Generic (PLEG): container finished" podID="cb46dfe5-5251-43ad-a7a1-7f52c860a08b" containerID="735b3e7482f1e5bc9d7632bd7a43c10ffd465e68fa5cef1e4329101387bcc10d" exitCode=0 Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.423354 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g95mx" event={"ID":"cb46dfe5-5251-43ad-a7a1-7f52c860a08b","Type":"ContainerDied","Data":"735b3e7482f1e5bc9d7632bd7a43c10ffd465e68fa5cef1e4329101387bcc10d"} Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.423392 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g95mx" event={"ID":"cb46dfe5-5251-43ad-a7a1-7f52c860a08b","Type":"ContainerStarted","Data":"c2b7994ca3167e91059bbc14f58b69efc258e65c88768452f2dc81e1486a2479"} Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.434843 5107 generic.go:358] "Generic (PLEG): container finished" podID="cf80d393-0243-47e1-89a1-ce7110280256" containerID="4cb79ab214daf1415e64d24429e2ca2b789ebbf6b72c922b1b0aaa4a1931ef15" exitCode=0 Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.435011 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6gc5" event={"ID":"cf80d393-0243-47e1-89a1-ce7110280256","Type":"ContainerDied","Data":"4cb79ab214daf1415e64d24429e2ca2b789ebbf6b72c922b1b0aaa4a1931ef15"} Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.439070 5107 generic.go:358] "Generic (PLEG): container finished" podID="777fdb5a-d598-4e89-804c-c0a26fb1d077" containerID="f26930708dc481784d24553601c24eaaa908ec0fa677360c6f6629d52c606d6d" exitCode=0 Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.439169 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpnt2" event={"ID":"777fdb5a-d598-4e89-804c-c0a26fb1d077","Type":"ContainerDied","Data":"f26930708dc481784d24553601c24eaaa908ec0fa677360c6f6629d52c606d6d"} Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.444310 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" podStartSLOduration=1.444292639 podStartE2EDuration="1.444292639s" podCreationTimestamp="2026-01-26 00:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:16:21.437869949 +0000 UTC m=+426.355464295" watchObservedRunningTime="2026-01-26 00:16:21.444292639 +0000 UTC m=+426.361886985" Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.450352 5107 generic.go:358] "Generic (PLEG): container finished" podID="437f5b38-eba1-4df5-88b1-40368d973099" containerID="5a37a4cdaabb55abba3b0e6c417bfd5bb0ba5c8f3fbbe457fc1da03fccba6055" exitCode=0 Jan 26 00:16:21 crc kubenswrapper[5107]: I0126 00:16:21.450478 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56zdj" 
event={"ID":"437f5b38-eba1-4df5-88b1-40368d973099","Type":"ContainerDied","Data":"5a37a4cdaabb55abba3b0e6c417bfd5bb0ba5c8f3fbbe457fc1da03fccba6055"} Jan 26 00:16:22 crc kubenswrapper[5107]: I0126 00:16:22.460068 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpnt2" event={"ID":"777fdb5a-d598-4e89-804c-c0a26fb1d077","Type":"ContainerStarted","Data":"38cd8967b84946cc082c2ee140f39d905e24062c997cec374ea47d550d0fda3b"} Jan 26 00:16:22 crc kubenswrapper[5107]: I0126 00:16:22.466159 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56zdj" event={"ID":"437f5b38-eba1-4df5-88b1-40368d973099","Type":"ContainerStarted","Data":"69579458d47f026ecf3e018b572b05167906f931e9ec4fec6974efeddb4825ec"} Jan 26 00:16:22 crc kubenswrapper[5107]: I0126 00:16:22.495204 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xpnt2" podStartSLOduration=5.743854487 podStartE2EDuration="6.495181587s" podCreationTimestamp="2026-01-26 00:16:16 +0000 UTC" firstStartedPulling="2026-01-26 00:16:19.384806166 +0000 UTC m=+424.302400512" lastFinishedPulling="2026-01-26 00:16:20.136133266 +0000 UTC m=+425.053727612" observedRunningTime="2026-01-26 00:16:22.484141761 +0000 UTC m=+427.401736117" watchObservedRunningTime="2026-01-26 00:16:22.495181587 +0000 UTC m=+427.412775933" Jan 26 00:16:22 crc kubenswrapper[5107]: I0126 00:16:22.518528 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-56zdj" podStartSLOduration=5.868841056 podStartE2EDuration="6.518483877s" podCreationTimestamp="2026-01-26 00:16:16 +0000 UTC" firstStartedPulling="2026-01-26 00:16:19.3880061 +0000 UTC m=+424.305600446" lastFinishedPulling="2026-01-26 00:16:20.037648931 +0000 UTC m=+424.955243267" observedRunningTime="2026-01-26 00:16:22.506682688 +0000 UTC m=+427.424277034" watchObservedRunningTime="2026-01-26 00:16:22.518483877 +0000 UTC m=+427.436078223" Jan 26 00:16:23 crc kubenswrapper[5107]: I0126 00:16:23.474273 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g95mx" event={"ID":"cb46dfe5-5251-43ad-a7a1-7f52c860a08b","Type":"ContainerStarted","Data":"c9cb6ec95f8a9ce457dedf94a92d9e39b08aeafdc6340f57909d1c8568d0ed0b"} Jan 26 00:16:24 crc kubenswrapper[5107]: I0126 00:16:24.484293 5107 generic.go:358] "Generic (PLEG): container finished" podID="cb46dfe5-5251-43ad-a7a1-7f52c860a08b" containerID="c9cb6ec95f8a9ce457dedf94a92d9e39b08aeafdc6340f57909d1c8568d0ed0b" exitCode=0 Jan 26 00:16:24 crc kubenswrapper[5107]: I0126 00:16:24.484400 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g95mx" event={"ID":"cb46dfe5-5251-43ad-a7a1-7f52c860a08b","Type":"ContainerDied","Data":"c9cb6ec95f8a9ce457dedf94a92d9e39b08aeafdc6340f57909d1c8568d0ed0b"} Jan 26 00:16:25 crc kubenswrapper[5107]: I0126 00:16:25.495799 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g95mx" event={"ID":"cb46dfe5-5251-43ad-a7a1-7f52c860a08b","Type":"ContainerStarted","Data":"a2bce7dd5ada8003ed8c8e0af5049f905d4b8e47892b53cc3c6e55d6c64573f8"} Jan 26 00:16:25 crc kubenswrapper[5107]: I0126 00:16:25.514865 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g95mx" podStartSLOduration=5.01378989 podStartE2EDuration="6.514847564s" 
podCreationTimestamp="2026-01-26 00:16:19 +0000 UTC" firstStartedPulling="2026-01-26 00:16:21.424509264 +0000 UTC m=+426.342103600" lastFinishedPulling="2026-01-26 00:16:22.925566928 +0000 UTC m=+427.843161274" observedRunningTime="2026-01-26 00:16:25.512825324 +0000 UTC m=+430.430419670" watchObservedRunningTime="2026-01-26 00:16:25.514847564 +0000 UTC m=+430.432441910" Jan 26 00:16:27 crc kubenswrapper[5107]: I0126 00:16:27.518292 5107 generic.go:358] "Generic (PLEG): container finished" podID="cf80d393-0243-47e1-89a1-ce7110280256" containerID="2138282670cae47329d197e2a309ca1a84896edbbb16bc947144a7176f07ed3f" exitCode=0 Jan 26 00:16:27 crc kubenswrapper[5107]: I0126 00:16:27.520538 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6gc5" event={"ID":"cf80d393-0243-47e1-89a1-ce7110280256","Type":"ContainerDied","Data":"2138282670cae47329d197e2a309ca1a84896edbbb16bc947144a7176f07ed3f"} Jan 26 00:16:27 crc kubenswrapper[5107]: I0126 00:16:27.803273 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:27 crc kubenswrapper[5107]: I0126 00:16:27.803774 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:27 crc kubenswrapper[5107]: I0126 00:16:27.815166 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:27 crc kubenswrapper[5107]: I0126 00:16:27.816150 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:27 crc kubenswrapper[5107]: I0126 00:16:27.872737 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:27 crc kubenswrapper[5107]: I0126 00:16:27.888121 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:27 crc kubenswrapper[5107]: E0126 00:16:27.989925 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd71d7360_3eef_4260_b288_7fc9f8d6fecc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0c7bec4_aeda_4946_9599_726d61c41d93.slice\": RecentStats: unable to find data in memory cache]" Jan 26 00:16:28 crc kubenswrapper[5107]: I0126 00:16:28.530231 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6gc5" event={"ID":"cf80d393-0243-47e1-89a1-ce7110280256","Type":"ContainerStarted","Data":"bab410741134ab80c6ebd8f81c890946b46e2f8161e2bc467bfa8f491d39113b"} Jan 26 00:16:28 crc kubenswrapper[5107]: I0126 00:16:28.557400 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r6gc5" podStartSLOduration=4.009684488 podStartE2EDuration="9.557374969s" podCreationTimestamp="2026-01-26 00:16:19 +0000 UTC" firstStartedPulling="2026-01-26 00:16:21.435829019 +0000 UTC m=+426.353423365" lastFinishedPulling="2026-01-26 00:16:26.9835195 +0000 UTC m=+431.901113846" observedRunningTime="2026-01-26 00:16:28.556317557 +0000 UTC m=+433.473911903" watchObservedRunningTime="2026-01-26 00:16:28.557374969 +0000 UTC 
m=+433.474969315" Jan 26 00:16:28 crc kubenswrapper[5107]: I0126 00:16:28.658538 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xpnt2" Jan 26 00:16:28 crc kubenswrapper[5107]: I0126 00:16:28.660941 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-56zdj" Jan 26 00:16:29 crc kubenswrapper[5107]: I0126 00:16:29.707234 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:29 crc kubenswrapper[5107]: I0126 00:16:29.709570 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:29 crc kubenswrapper[5107]: I0126 00:16:29.759804 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:29 crc kubenswrapper[5107]: I0126 00:16:29.918396 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:29 crc kubenswrapper[5107]: I0126 00:16:29.918488 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:30 crc kubenswrapper[5107]: I0126 00:16:30.726516 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:16:30 crc kubenswrapper[5107]: I0126 00:16:30.727107 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:16:30 crc kubenswrapper[5107]: I0126 00:16:30.966127 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g95mx" podUID="cb46dfe5-5251-43ad-a7a1-7f52c860a08b" containerName="registry-server" probeResult="failure" output=< Jan 26 00:16:30 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s Jan 26 00:16:30 crc kubenswrapper[5107]: > Jan 26 00:16:38 crc kubenswrapper[5107]: E0126 00:16:38.102592 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd71d7360_3eef_4260_b288_7fc9f8d6fecc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0c7bec4_aeda_4946_9599_726d61c41d93.slice\": RecentStats: unable to find data in memory cache]" Jan 26 00:16:39 crc kubenswrapper[5107]: I0126 00:16:39.981588 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:40 crc kubenswrapper[5107]: I0126 00:16:40.036161 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g95mx" Jan 26 00:16:41 crc kubenswrapper[5107]: I0126 00:16:41.696804 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:16:42 crc kubenswrapper[5107]: I0126 00:16:42.472901 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-vv5pc" Jan 26 00:16:42 crc kubenswrapper[5107]: I0126 00:16:42.534189 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-5hcgj"] Jan 26 00:16:48 crc kubenswrapper[5107]: E0126 00:16:48.218008 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd71d7360_3eef_4260_b288_7fc9f8d6fecc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0c7bec4_aeda_4946_9599_726d61c41d93.slice\": RecentStats: unable to find data in memory cache]" Jan 26 00:16:58 crc kubenswrapper[5107]: E0126 00:16:58.365753 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0c7bec4_aeda_4946_9599_726d61c41d93.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd71d7360_3eef_4260_b288_7fc9f8d6fecc.slice\": RecentStats: unable to find data in memory cache]" Jan 26 00:17:00 crc kubenswrapper[5107]: I0126 00:17:00.723862 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:17:00 crc kubenswrapper[5107]: I0126 00:17:00.724327 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:17:07 crc kubenswrapper[5107]: I0126 00:17:07.576348 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" podUID="ae7da3db-5cbd-40ff-adfb-417c0d055042" containerName="registry" containerID="cri-o://31c05d3c96b7c6ff6439ca5881a48f83b8d1dfb9fd7b74ab245de1b77dbbce8a" gracePeriod=30 Jan 26 00:17:08 crc kubenswrapper[5107]: E0126 00:17:08.536962 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd71d7360_3eef_4260_b288_7fc9f8d6fecc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0c7bec4_aeda_4946_9599_726d61c41d93.slice\": RecentStats: unable to find data in memory cache]" Jan 26 00:17:09 crc kubenswrapper[5107]: I0126 00:17:09.933399 5107 generic.go:358] "Generic (PLEG): container finished" podID="ae7da3db-5cbd-40ff-adfb-417c0d055042" containerID="31c05d3c96b7c6ff6439ca5881a48f83b8d1dfb9fd7b74ab245de1b77dbbce8a" exitCode=0 Jan 26 00:17:09 crc kubenswrapper[5107]: I0126 00:17:09.933484 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" 
event={"ID":"ae7da3db-5cbd-40ff-adfb-417c0d055042","Type":"ContainerDied","Data":"31c05d3c96b7c6ff6439ca5881a48f83b8d1dfb9fd7b74ab245de1b77dbbce8a"} Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.733683 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.811495 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ae7da3db-5cbd-40ff-adfb-417c0d055042-ca-trust-extracted\") pod \"ae7da3db-5cbd-40ff-adfb-417c0d055042\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.811955 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"ae7da3db-5cbd-40ff-adfb-417c0d055042\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.812013 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-bound-sa-token\") pod \"ae7da3db-5cbd-40ff-adfb-417c0d055042\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.812152 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8fdz\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-kube-api-access-l8fdz\") pod \"ae7da3db-5cbd-40ff-adfb-417c0d055042\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.812189 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-tls\") pod \"ae7da3db-5cbd-40ff-adfb-417c0d055042\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.812365 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-trusted-ca\") pod \"ae7da3db-5cbd-40ff-adfb-417c0d055042\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.812558 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ae7da3db-5cbd-40ff-adfb-417c0d055042-installation-pull-secrets\") pod \"ae7da3db-5cbd-40ff-adfb-417c0d055042\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.812617 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-certificates\") pod \"ae7da3db-5cbd-40ff-adfb-417c0d055042\" (UID: \"ae7da3db-5cbd-40ff-adfb-417c0d055042\") " Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.813385 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ae7da3db-5cbd-40ff-adfb-417c0d055042" (UID: 
"ae7da3db-5cbd-40ff-adfb-417c0d055042"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.814415 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ae7da3db-5cbd-40ff-adfb-417c0d055042" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.820935 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ae7da3db-5cbd-40ff-adfb-417c0d055042" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.822592 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ae7da3db-5cbd-40ff-adfb-417c0d055042" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.822990 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-kube-api-access-l8fdz" (OuterVolumeSpecName: "kube-api-access-l8fdz") pod "ae7da3db-5cbd-40ff-adfb-417c0d055042" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042"). InnerVolumeSpecName "kube-api-access-l8fdz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.823138 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae7da3db-5cbd-40ff-adfb-417c0d055042-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ae7da3db-5cbd-40ff-adfb-417c0d055042" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.826734 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "ae7da3db-5cbd-40ff-adfb-417c0d055042" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.835264 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae7da3db-5cbd-40ff-adfb-417c0d055042-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ae7da3db-5cbd-40ff-adfb-417c0d055042" (UID: "ae7da3db-5cbd-40ff-adfb-417c0d055042"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.914460 5107 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ae7da3db-5cbd-40ff-adfb-417c0d055042-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.914506 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.914515 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l8fdz\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-kube-api-access-l8fdz\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.914529 5107 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.914537 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.914546 5107 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ae7da3db-5cbd-40ff-adfb-417c0d055042-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.914555 5107 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ae7da3db-5cbd-40ff-adfb-417c0d055042-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.943683 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" event={"ID":"ae7da3db-5cbd-40ff-adfb-417c0d055042","Type":"ContainerDied","Data":"77d998845b4c71fcb32490bc1bffdda6ea959390a9b72e8e0e61cbd21aef863e"} Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.943760 5107 scope.go:117] "RemoveContainer" containerID="31c05d3c96b7c6ff6439ca5881a48f83b8d1dfb9fd7b74ab245de1b77dbbce8a" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.943704 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-5hcgj" Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.987603 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-5hcgj"] Jan 26 00:17:10 crc kubenswrapper[5107]: I0126 00:17:10.996433 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-5hcgj"] Jan 26 00:17:12 crc kubenswrapper[5107]: I0126 00:17:12.124084 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae7da3db-5cbd-40ff-adfb-417c0d055042" path="/var/lib/kubelet/pods/ae7da3db-5cbd-40ff-adfb-417c0d055042/volumes" Jan 26 00:17:30 crc kubenswrapper[5107]: I0126 00:17:30.723345 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:17:30 crc kubenswrapper[5107]: I0126 00:17:30.724060 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:17:30 crc kubenswrapper[5107]: I0126 00:17:30.724107 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:17:30 crc kubenswrapper[5107]: I0126 00:17:30.724721 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8533d9d343a82eee105ed10898832c472e05f5b38002db52b15945774cae6a3"} pod="openshift-machine-config-operator/machine-config-daemon-94c4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:17:30 crc kubenswrapper[5107]: I0126 00:17:30.724801 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" containerID="cri-o://e8533d9d343a82eee105ed10898832c472e05f5b38002db52b15945774cae6a3" gracePeriod=600 Jan 26 00:17:32 crc kubenswrapper[5107]: I0126 00:17:32.084384 5107 generic.go:358] "Generic (PLEG): container finished" podID="7d907601-1852-43f9-8a70-ef4e71351e81" containerID="e8533d9d343a82eee105ed10898832c472e05f5b38002db52b15945774cae6a3" exitCode=0 Jan 26 00:17:32 crc kubenswrapper[5107]: I0126 00:17:32.084465 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerDied","Data":"e8533d9d343a82eee105ed10898832c472e05f5b38002db52b15945774cae6a3"} Jan 26 00:17:32 crc kubenswrapper[5107]: I0126 00:17:32.085029 5107 scope.go:117] "RemoveContainer" containerID="c034d499a3fa7451c5b69f34167ce0e89f56510875068ff8a2d30e2dd29b5599" Jan 26 00:17:34 crc kubenswrapper[5107]: I0126 00:17:34.111114 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" 
event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerStarted","Data":"fe8dfc3c3a0dc6bbcbdfa5c6d9274312703627715669e0a705943cc27e300da3"} Jan 26 00:17:46 crc kubenswrapper[5107]: I0126 00:17:46.818684 5107 ???:1] "http: TLS handshake error from 192.168.126.11:43908: no serving certificate available for the kubelet" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.140836 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489778-pbg8c"] Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.142578 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae7da3db-5cbd-40ff-adfb-417c0d055042" containerName="registry" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.142600 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae7da3db-5cbd-40ff-adfb-417c0d055042" containerName="registry" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.142770 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae7da3db-5cbd-40ff-adfb-417c0d055042" containerName="registry" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.151488 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-pbg8c" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.151870 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-pbg8c"] Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.155065 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.155107 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.158957 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-96gbq\"" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.175303 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcmsq\" (UniqueName: \"kubernetes.io/projected/e2bb130b-2e77-4120-b7f1-9a67acbbbb4c-kube-api-access-kcmsq\") pod \"auto-csr-approver-29489778-pbg8c\" (UID: \"e2bb130b-2e77-4120-b7f1-9a67acbbbb4c\") " pod="openshift-infra/auto-csr-approver-29489778-pbg8c" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.276815 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kcmsq\" (UniqueName: \"kubernetes.io/projected/e2bb130b-2e77-4120-b7f1-9a67acbbbb4c-kube-api-access-kcmsq\") pod \"auto-csr-approver-29489778-pbg8c\" (UID: \"e2bb130b-2e77-4120-b7f1-9a67acbbbb4c\") " pod="openshift-infra/auto-csr-approver-29489778-pbg8c" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.302218 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcmsq\" (UniqueName: \"kubernetes.io/projected/e2bb130b-2e77-4120-b7f1-9a67acbbbb4c-kube-api-access-kcmsq\") pod \"auto-csr-approver-29489778-pbg8c\" (UID: \"e2bb130b-2e77-4120-b7f1-9a67acbbbb4c\") " pod="openshift-infra/auto-csr-approver-29489778-pbg8c" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.468990 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-pbg8c" Jan 26 00:18:00 crc kubenswrapper[5107]: I0126 00:18:00.740504 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-pbg8c"] Jan 26 00:18:01 crc kubenswrapper[5107]: I0126 00:18:01.290845 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-pbg8c" event={"ID":"e2bb130b-2e77-4120-b7f1-9a67acbbbb4c","Type":"ContainerStarted","Data":"3003e167b094f36e3a81af19b9b79216c0ce219db7f8c144ed8773ef238b77bd"} Jan 26 00:18:07 crc kubenswrapper[5107]: I0126 00:18:07.210471 5107 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-wrt9q" Jan 26 00:18:07 crc kubenswrapper[5107]: I0126 00:18:07.234911 5107 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-wrt9q" Jan 26 00:18:07 crc kubenswrapper[5107]: I0126 00:18:07.410672 5107 generic.go:358] "Generic (PLEG): container finished" podID="e2bb130b-2e77-4120-b7f1-9a67acbbbb4c" containerID="f7b2a467179547ad601467006c2bdf83998fecf2a87fe3837025efd6f8bef2f5" exitCode=0 Jan 26 00:18:07 crc kubenswrapper[5107]: I0126 00:18:07.410786 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-pbg8c" event={"ID":"e2bb130b-2e77-4120-b7f1-9a67acbbbb4c","Type":"ContainerDied","Data":"f7b2a467179547ad601467006c2bdf83998fecf2a87fe3837025efd6f8bef2f5"} Jan 26 00:18:08 crc kubenswrapper[5107]: I0126 00:18:08.237010 5107 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-25 00:13:07 +0000 UTC" deadline="2026-02-17 22:32:43.914468584 +0000 UTC" Jan 26 00:18:08 crc kubenswrapper[5107]: I0126 00:18:08.237068 5107 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="550h14m35.677405877s" Jan 26 00:18:08 crc kubenswrapper[5107]: I0126 00:18:08.655427 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-pbg8c" Jan 26 00:18:08 crc kubenswrapper[5107]: I0126 00:18:08.753585 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcmsq\" (UniqueName: \"kubernetes.io/projected/e2bb130b-2e77-4120-b7f1-9a67acbbbb4c-kube-api-access-kcmsq\") pod \"e2bb130b-2e77-4120-b7f1-9a67acbbbb4c\" (UID: \"e2bb130b-2e77-4120-b7f1-9a67acbbbb4c\") " Jan 26 00:18:08 crc kubenswrapper[5107]: I0126 00:18:08.765156 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bb130b-2e77-4120-b7f1-9a67acbbbb4c-kube-api-access-kcmsq" (OuterVolumeSpecName: "kube-api-access-kcmsq") pod "e2bb130b-2e77-4120-b7f1-9a67acbbbb4c" (UID: "e2bb130b-2e77-4120-b7f1-9a67acbbbb4c"). InnerVolumeSpecName "kube-api-access-kcmsq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:18:08 crc kubenswrapper[5107]: I0126 00:18:08.855925 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kcmsq\" (UniqueName: \"kubernetes.io/projected/e2bb130b-2e77-4120-b7f1-9a67acbbbb4c-kube-api-access-kcmsq\") on node \"crc\" DevicePath \"\"" Jan 26 00:18:09 crc kubenswrapper[5107]: I0126 00:18:09.237875 5107 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-25 00:13:07 +0000 UTC" deadline="2026-02-17 04:18:04.999055319 +0000 UTC" Jan 26 00:18:09 crc kubenswrapper[5107]: I0126 00:18:09.237938 5107 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="531h59m55.761122607s" Jan 26 00:18:09 crc kubenswrapper[5107]: I0126 00:18:09.426782 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-pbg8c" event={"ID":"e2bb130b-2e77-4120-b7f1-9a67acbbbb4c","Type":"ContainerDied","Data":"3003e167b094f36e3a81af19b9b79216c0ce219db7f8c144ed8773ef238b77bd"} Jan 26 00:18:09 crc kubenswrapper[5107]: I0126 00:18:09.427106 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3003e167b094f36e3a81af19b9b79216c0ce219db7f8c144ed8773ef238b77bd" Jan 26 00:18:09 crc kubenswrapper[5107]: I0126 00:18:09.426812 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-pbg8c" Jan 26 00:19:16 crc kubenswrapper[5107]: I0126 00:19:16.798545 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:19:16 crc kubenswrapper[5107]: I0126 00:19:16.808155 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:19:16 crc kubenswrapper[5107]: I0126 00:19:16.820865 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:19:16 crc kubenswrapper[5107]: I0126 00:19:16.827663 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.146144 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489780-qn6md"] Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.148256 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e2bb130b-2e77-4120-b7f1-9a67acbbbb4c" containerName="oc" Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.148288 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bb130b-2e77-4120-b7f1-9a67acbbbb4c" containerName="oc" Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.148570 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="e2bb130b-2e77-4120-b7f1-9a67acbbbb4c" containerName="oc" Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.679291 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-qn6md"] Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.679509 5107 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qn6md" Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.683558 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.686399 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-96gbq\"" Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.686758 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.723396 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.723875 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.772019 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wpgd\" (UniqueName: \"kubernetes.io/projected/52b61f87-e656-4450-af3e-26b5c5454e30-kube-api-access-9wpgd\") pod \"auto-csr-approver-29489780-qn6md\" (UID: \"52b61f87-e656-4450-af3e-26b5c5454e30\") " pod="openshift-infra/auto-csr-approver-29489780-qn6md" Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.872814 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9wpgd\" (UniqueName: \"kubernetes.io/projected/52b61f87-e656-4450-af3e-26b5c5454e30-kube-api-access-9wpgd\") pod \"auto-csr-approver-29489780-qn6md\" (UID: \"52b61f87-e656-4450-af3e-26b5c5454e30\") " pod="openshift-infra/auto-csr-approver-29489780-qn6md" Jan 26 00:20:00 crc kubenswrapper[5107]: I0126 00:20:00.915396 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wpgd\" (UniqueName: \"kubernetes.io/projected/52b61f87-e656-4450-af3e-26b5c5454e30-kube-api-access-9wpgd\") pod \"auto-csr-approver-29489780-qn6md\" (UID: \"52b61f87-e656-4450-af3e-26b5c5454e30\") " pod="openshift-infra/auto-csr-approver-29489780-qn6md" Jan 26 00:20:01 crc kubenswrapper[5107]: I0126 00:20:01.015946 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qn6md" Jan 26 00:20:01 crc kubenswrapper[5107]: I0126 00:20:01.351669 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-qn6md"] Jan 26 00:20:01 crc kubenswrapper[5107]: I0126 00:20:01.361709 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:20:02 crc kubenswrapper[5107]: I0126 00:20:02.242165 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-qn6md" event={"ID":"52b61f87-e656-4450-af3e-26b5c5454e30","Type":"ContainerStarted","Data":"99c94448b4e4314036cc11c75d9c969030b4802431ed5f4ed9a49f4a92c708cd"} Jan 26 00:20:06 crc kubenswrapper[5107]: I0126 00:20:06.274604 5107 generic.go:358] "Generic (PLEG): container finished" podID="52b61f87-e656-4450-af3e-26b5c5454e30" containerID="465f540c310b393f6c1e985b7598732c9afc00a55e34d937da3aec18535e59db" exitCode=0 Jan 26 00:20:06 crc kubenswrapper[5107]: I0126 00:20:06.274724 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-qn6md" event={"ID":"52b61f87-e656-4450-af3e-26b5c5454e30","Type":"ContainerDied","Data":"465f540c310b393f6c1e985b7598732c9afc00a55e34d937da3aec18535e59db"} Jan 26 00:20:07 crc kubenswrapper[5107]: I0126 00:20:07.481715 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qn6md" Jan 26 00:20:07 crc kubenswrapper[5107]: I0126 00:20:07.606063 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wpgd\" (UniqueName: \"kubernetes.io/projected/52b61f87-e656-4450-af3e-26b5c5454e30-kube-api-access-9wpgd\") pod \"52b61f87-e656-4450-af3e-26b5c5454e30\" (UID: \"52b61f87-e656-4450-af3e-26b5c5454e30\") " Jan 26 00:20:07 crc kubenswrapper[5107]: I0126 00:20:07.615184 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b61f87-e656-4450-af3e-26b5c5454e30-kube-api-access-9wpgd" (OuterVolumeSpecName: "kube-api-access-9wpgd") pod "52b61f87-e656-4450-af3e-26b5c5454e30" (UID: "52b61f87-e656-4450-af3e-26b5c5454e30"). InnerVolumeSpecName "kube-api-access-9wpgd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:20:07 crc kubenswrapper[5107]: I0126 00:20:07.708221 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9wpgd\" (UniqueName: \"kubernetes.io/projected/52b61f87-e656-4450-af3e-26b5c5454e30-kube-api-access-9wpgd\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:08 crc kubenswrapper[5107]: I0126 00:20:08.293081 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-qn6md" Jan 26 00:20:08 crc kubenswrapper[5107]: I0126 00:20:08.293084 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-qn6md" event={"ID":"52b61f87-e656-4450-af3e-26b5c5454e30","Type":"ContainerDied","Data":"99c94448b4e4314036cc11c75d9c969030b4802431ed5f4ed9a49f4a92c708cd"} Jan 26 00:20:08 crc kubenswrapper[5107]: I0126 00:20:08.293236 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99c94448b4e4314036cc11c75d9c969030b4802431ed5f4ed9a49f4a92c708cd" Jan 26 00:20:30 crc kubenswrapper[5107]: I0126 00:20:30.724296 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:20:30 crc kubenswrapper[5107]: I0126 00:20:30.725225 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:21:00 crc kubenswrapper[5107]: I0126 00:21:00.724461 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:21:00 crc kubenswrapper[5107]: I0126 00:21:00.725470 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:21:00 crc kubenswrapper[5107]: I0126 00:21:00.725565 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:21:00 crc kubenswrapper[5107]: I0126 00:21:00.726959 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe8dfc3c3a0dc6bbcbdfa5c6d9274312703627715669e0a705943cc27e300da3"} pod="openshift-machine-config-operator/machine-config-daemon-94c4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:21:00 crc kubenswrapper[5107]: I0126 00:21:00.727213 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" containerID="cri-o://fe8dfc3c3a0dc6bbcbdfa5c6d9274312703627715669e0a705943cc27e300da3" gracePeriod=600 Jan 26 00:21:01 crc kubenswrapper[5107]: I0126 00:21:01.835631 5107 generic.go:358] "Generic (PLEG): container finished" podID="7d907601-1852-43f9-8a70-ef4e71351e81" containerID="fe8dfc3c3a0dc6bbcbdfa5c6d9274312703627715669e0a705943cc27e300da3" exitCode=0 Jan 26 00:21:01 crc kubenswrapper[5107]: I0126 00:21:01.835745 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerDied","Data":"fe8dfc3c3a0dc6bbcbdfa5c6d9274312703627715669e0a705943cc27e300da3"} Jan 26 00:21:01 crc kubenswrapper[5107]: I0126 00:21:01.836293 5107 scope.go:117] "RemoveContainer" containerID="e8533d9d343a82eee105ed10898832c472e05f5b38002db52b15945774cae6a3" Jan 26 00:21:02 crc kubenswrapper[5107]: I0126 00:21:02.847192 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerStarted","Data":"ed7fa55042f2cc4045dc49359ff131078dd30efec1ec5c7e0bdd12d2f213019e"} Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.174641 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489782-nmsvm"] Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.176460 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="52b61f87-e656-4450-af3e-26b5c5454e30" containerName="oc" Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.176486 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b61f87-e656-4450-af3e-26b5c5454e30" containerName="oc" Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.176663 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="52b61f87-e656-4450-af3e-26b5c5454e30" containerName="oc" Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.183621 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-nmsvm" Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.184181 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-nmsvm"] Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.186051 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-96gbq\"" Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.188101 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.190391 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.307475 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qgnq\" (UniqueName: \"kubernetes.io/projected/a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec-kube-api-access-2qgnq\") pod \"auto-csr-approver-29489782-nmsvm\" (UID: \"a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec\") " pod="openshift-infra/auto-csr-approver-29489782-nmsvm" Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.409433 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2qgnq\" (UniqueName: \"kubernetes.io/projected/a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec-kube-api-access-2qgnq\") pod \"auto-csr-approver-29489782-nmsvm\" (UID: \"a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec\") " pod="openshift-infra/auto-csr-approver-29489782-nmsvm" Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.434173 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qgnq\" (UniqueName: \"kubernetes.io/projected/a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec-kube-api-access-2qgnq\") pod 
\"auto-csr-approver-29489782-nmsvm\" (UID: \"a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec\") " pod="openshift-infra/auto-csr-approver-29489782-nmsvm" Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.508817 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-nmsvm" Jan 26 00:22:00 crc kubenswrapper[5107]: I0126 00:22:00.771310 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-nmsvm"] Jan 26 00:22:01 crc kubenswrapper[5107]: I0126 00:22:01.283251 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-nmsvm" event={"ID":"a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec","Type":"ContainerStarted","Data":"c6d5147aa3938dc112bab651fbba2a0d1f3f3fc968997a2a2d25c9e932a6ffc6"} Jan 26 00:22:02 crc kubenswrapper[5107]: I0126 00:22:02.300075 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-nmsvm" event={"ID":"a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec","Type":"ContainerStarted","Data":"1475fcefbeb042c17674098c08eba950b93da7effbc00aaeaf7f6a7b87cff919"} Jan 26 00:22:02 crc kubenswrapper[5107]: I0126 00:22:02.325837 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489782-nmsvm" podStartSLOduration=1.278739168 podStartE2EDuration="2.325801337s" podCreationTimestamp="2026-01-26 00:22:00 +0000 UTC" firstStartedPulling="2026-01-26 00:22:00.785167589 +0000 UTC m=+765.702761935" lastFinishedPulling="2026-01-26 00:22:01.832229738 +0000 UTC m=+766.749824104" observedRunningTime="2026-01-26 00:22:02.318789033 +0000 UTC m=+767.236383379" watchObservedRunningTime="2026-01-26 00:22:02.325801337 +0000 UTC m=+767.243395703" Jan 26 00:22:02 crc kubenswrapper[5107]: E0126 00:22:02.506220 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4a83ec9_dea0_40e1_ba37_4eb4e2edb9ec.slice/crio-conmon-1475fcefbeb042c17674098c08eba950b93da7effbc00aaeaf7f6a7b87cff919.scope\": RecentStats: unable to find data in memory cache]" Jan 26 00:22:03 crc kubenswrapper[5107]: I0126 00:22:03.309316 5107 generic.go:358] "Generic (PLEG): container finished" podID="a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec" containerID="1475fcefbeb042c17674098c08eba950b93da7effbc00aaeaf7f6a7b87cff919" exitCode=0 Jan 26 00:22:03 crc kubenswrapper[5107]: I0126 00:22:03.309426 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-nmsvm" event={"ID":"a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec","Type":"ContainerDied","Data":"1475fcefbeb042c17674098c08eba950b93da7effbc00aaeaf7f6a7b87cff919"} Jan 26 00:22:04 crc kubenswrapper[5107]: I0126 00:22:04.567853 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-nmsvm" Jan 26 00:22:04 crc kubenswrapper[5107]: I0126 00:22:04.674417 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qgnq\" (UniqueName: \"kubernetes.io/projected/a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec-kube-api-access-2qgnq\") pod \"a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec\" (UID: \"a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec\") " Jan 26 00:22:04 crc kubenswrapper[5107]: I0126 00:22:04.683270 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec-kube-api-access-2qgnq" (OuterVolumeSpecName: "kube-api-access-2qgnq") pod "a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec" (UID: "a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec"). InnerVolumeSpecName "kube-api-access-2qgnq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:04 crc kubenswrapper[5107]: I0126 00:22:04.775875 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2qgnq\" (UniqueName: \"kubernetes.io/projected/a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec-kube-api-access-2qgnq\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:05 crc kubenswrapper[5107]: I0126 00:22:05.327595 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-nmsvm" event={"ID":"a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec","Type":"ContainerDied","Data":"c6d5147aa3938dc112bab651fbba2a0d1f3f3fc968997a2a2d25c9e932a6ffc6"} Jan 26 00:22:05 crc kubenswrapper[5107]: I0126 00:22:05.328080 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6d5147aa3938dc112bab651fbba2a0d1f3f3fc968997a2a2d25c9e932a6ffc6" Jan 26 00:22:05 crc kubenswrapper[5107]: I0126 00:22:05.327609 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-nmsvm" Jan 26 00:22:09 crc kubenswrapper[5107]: I0126 00:22:09.938898 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn"] Jan 26 00:22:09 crc kubenswrapper[5107]: I0126 00:22:09.940134 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" containerName="kube-rbac-proxy" containerID="cri-o://278a16c98dd11167e9a1c7d0851eac90113bcf9aeda2aa7628d1d0ac6ad6ec60" gracePeriod=30 Jan 26 00:22:09 crc kubenswrapper[5107]: I0126 00:22:09.940234 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" containerName="ovnkube-cluster-manager" containerID="cri-o://09cf3f70d300e3ac7e3df79f5dc1360a09542552aab2a9a0f740255d5e671e32" gracePeriod=30 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.181347 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nvznv"] Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.182081 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="nbdb" containerID="cri-o://232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db" gracePeriod=30 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.182252 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="northd" containerID="cri-o://e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb" gracePeriod=30 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.182315 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovn-acl-logging" containerID="cri-o://ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c" gracePeriod=30 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.182289 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="kube-rbac-proxy-node" containerID="cri-o://2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c" gracePeriod=30 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.182371 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="sbdb" containerID="cri-o://ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4" gracePeriod=30 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.182124 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2" gracePeriod=30 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.182016 5107 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovn-controller" containerID="cri-o://de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26" gracePeriod=30 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.245284 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovnkube-controller" containerID="cri-o://a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5" gracePeriod=30 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.392997 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nvznv_d12cfb26-8718-4def-8f36-c7eaa12bc463/ovn-acl-logging/0.log" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.393936 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nvznv_d12cfb26-8718-4def-8f36-c7eaa12bc463/ovn-controller/0.log" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.394372 5107 generic.go:358] "Generic (PLEG): container finished" podID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerID="37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2" exitCode=0 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.394411 5107 generic.go:358] "Generic (PLEG): container finished" podID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerID="2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c" exitCode=0 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.394421 5107 generic.go:358] "Generic (PLEG): container finished" podID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerID="ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c" exitCode=143 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.394431 5107 generic.go:358] "Generic (PLEG): container finished" podID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerID="de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26" exitCode=143 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.394547 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerDied","Data":"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2"} Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.394599 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerDied","Data":"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c"} Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.394615 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerDied","Data":"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c"} Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.394632 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerDied","Data":"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26"} Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.399100 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-f2mpq_2e5342d5-2d0c-458d-94b7-25c802ce298a/kube-multus/0.log" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.399150 5107 generic.go:358] "Generic (PLEG): container finished" podID="2e5342d5-2d0c-458d-94b7-25c802ce298a" containerID="0af05be8661681d1cc4310b5d003875b708d55167f48758b301cbd8b2fa6aad8" exitCode=2 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.399206 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f2mpq" event={"ID":"2e5342d5-2d0c-458d-94b7-25c802ce298a","Type":"ContainerDied","Data":"0af05be8661681d1cc4310b5d003875b708d55167f48758b301cbd8b2fa6aad8"} Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.400117 5107 scope.go:117] "RemoveContainer" containerID="0af05be8661681d1cc4310b5d003875b708d55167f48758b301cbd8b2fa6aad8" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.401834 5107 generic.go:358] "Generic (PLEG): container finished" podID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" containerID="09cf3f70d300e3ac7e3df79f5dc1360a09542552aab2a9a0f740255d5e671e32" exitCode=0 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.401871 5107 generic.go:358] "Generic (PLEG): container finished" podID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" containerID="278a16c98dd11167e9a1c7d0851eac90113bcf9aeda2aa7628d1d0ac6ad6ec60" exitCode=0 Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.402159 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" event={"ID":"ec13f4fa-c252-4f6a-9a31-43f70366ae48","Type":"ContainerDied","Data":"09cf3f70d300e3ac7e3df79f5dc1360a09542552aab2a9a0f740255d5e671e32"} Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.402197 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" event={"ID":"ec13f4fa-c252-4f6a-9a31-43f70366ae48","Type":"ContainerDied","Data":"278a16c98dd11167e9a1c7d0851eac90113bcf9aeda2aa7628d1d0ac6ad6ec60"} Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.736991 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.768693 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh"] Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.769458 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" containerName="ovnkube-cluster-manager" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.769485 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" containerName="ovnkube-cluster-manager" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.769515 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec" containerName="oc" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.769525 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec" containerName="oc" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.769536 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" containerName="kube-rbac-proxy" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.769545 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" containerName="kube-rbac-proxy" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.769690 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" containerName="ovnkube-cluster-manager" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.769715 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" containerName="kube-rbac-proxy" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.769726 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec" containerName="oc" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.773518 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.792840 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm2qk\" (UniqueName: \"kubernetes.io/projected/ec13f4fa-c252-4f6a-9a31-43f70366ae48-kube-api-access-nm2qk\") pod \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.794963 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovnkube-config\") pod \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.795169 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovn-control-plane-metrics-cert\") pod \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.795249 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-env-overrides\") pod \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\" (UID: \"ec13f4fa-c252-4f6a-9a31-43f70366ae48\") " Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.796190 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ec13f4fa-c252-4f6a-9a31-43f70366ae48" (UID: "ec13f4fa-c252-4f6a-9a31-43f70366ae48"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.796214 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ec13f4fa-c252-4f6a-9a31-43f70366ae48" (UID: "ec13f4fa-c252-4f6a-9a31-43f70366ae48"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.796351 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-682bh\" (UniqueName: \"kubernetes.io/projected/504ba906-9364-4a51-a3f6-812ff5c459fb-kube-api-access-682bh\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: \"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.796409 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/504ba906-9364-4a51-a3f6-812ff5c459fb-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: \"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.796451 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/504ba906-9364-4a51-a3f6-812ff5c459fb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: \"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.796504 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/504ba906-9364-4a51-a3f6-812ff5c459fb-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: \"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.796600 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.796618 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.803353 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec13f4fa-c252-4f6a-9a31-43f70366ae48-kube-api-access-nm2qk" (OuterVolumeSpecName: "kube-api-access-nm2qk") pod "ec13f4fa-c252-4f6a-9a31-43f70366ae48" (UID: "ec13f4fa-c252-4f6a-9a31-43f70366ae48"). InnerVolumeSpecName "kube-api-access-nm2qk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.803390 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "ec13f4fa-c252-4f6a-9a31-43f70366ae48" (UID: "ec13f4fa-c252-4f6a-9a31-43f70366ae48"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.897927 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/504ba906-9364-4a51-a3f6-812ff5c459fb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: \"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.897990 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/504ba906-9364-4a51-a3f6-812ff5c459fb-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: \"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.898054 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-682bh\" (UniqueName: \"kubernetes.io/projected/504ba906-9364-4a51-a3f6-812ff5c459fb-kube-api-access-682bh\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: \"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.898111 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/504ba906-9364-4a51-a3f6-812ff5c459fb-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: \"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.898174 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nm2qk\" (UniqueName: \"kubernetes.io/projected/ec13f4fa-c252-4f6a-9a31-43f70366ae48-kube-api-access-nm2qk\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.898185 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec13f4fa-c252-4f6a-9a31-43f70366ae48-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.898759 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/504ba906-9364-4a51-a3f6-812ff5c459fb-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: \"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.899142 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/504ba906-9364-4a51-a3f6-812ff5c459fb-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: \"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.901909 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/504ba906-9364-4a51-a3f6-812ff5c459fb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: 
\"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:10 crc kubenswrapper[5107]: I0126 00:22:10.921222 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-682bh\" (UniqueName: \"kubernetes.io/projected/504ba906-9364-4a51-a3f6-812ff5c459fb-kube-api-access-682bh\") pod \"ovnkube-control-plane-97c9b6c48-qzjmh\" (UID: \"504ba906-9364-4a51-a3f6-812ff5c459fb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.029938 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nvznv_d12cfb26-8718-4def-8f36-c7eaa12bc463/ovn-acl-logging/0.log" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.030470 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nvznv_d12cfb26-8718-4def-8f36-c7eaa12bc463/ovn-controller/0.log" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.030958 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.098767 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.100662 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bm9q\" (UniqueName: \"kubernetes.io/projected/d12cfb26-8718-4def-8f36-c7eaa12bc463-kube-api-access-9bm9q\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.100714 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-etc-openvswitch\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.100751 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-systemd-units\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.100784 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-ovn-kubernetes\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.100820 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-env-overrides\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.100852 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-script-lib\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: 
\"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.100872 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-netns\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.100927 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-var-lib-openvswitch\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.100985 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-kubelet\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101051 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-slash\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101075 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-config\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101118 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovn-node-metrics-cert\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101143 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-ovn\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101200 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-systemd\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101228 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-log-socket\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101281 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-netd\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: 
\"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101314 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101347 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-openvswitch\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101371 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-node-log\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101395 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-bin\") pod \"d12cfb26-8718-4def-8f36-c7eaa12bc463\" (UID: \"d12cfb26-8718-4def-8f36-c7eaa12bc463\") " Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101823 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101916 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101914 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-log-socket" (OuterVolumeSpecName: "log-socket") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101966 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.101936 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-slash" (OuterVolumeSpecName: "host-slash") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.102028 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.102069 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-node-log" (OuterVolumeSpecName: "node-log") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.102102 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.103185 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.104624 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ptknl"] Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.104709 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.104785 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.104840 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.104912 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.104979 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.105629 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.105794 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.105863 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111428 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovn-controller" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111482 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovn-controller" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111497 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovn-acl-logging" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111502 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovn-acl-logging" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111515 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="nbdb" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111520 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="nbdb" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111536 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111546 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111570 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="northd" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111577 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="northd" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111613 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="kube-rbac-proxy-node" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111624 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="kube-rbac-proxy-node" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111635 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="sbdb" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111645 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="sbdb" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111658 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="kubecfg-setup" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111665 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="kubecfg-setup" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111681 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovnkube-controller" Jan 26 00:22:11 crc 
kubenswrapper[5107]: I0126 00:22:11.111693 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovnkube-controller" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111971 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="nbdb" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111984 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.111997 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="northd" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.112014 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="sbdb" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.112025 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovn-controller" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.112036 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovnkube-controller" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.112046 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="ovn-acl-logging" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.112054 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerName="kube-rbac-proxy-node" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.117951 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d12cfb26-8718-4def-8f36-c7eaa12bc463-kube-api-access-9bm9q" (OuterVolumeSpecName: "kube-api-access-9bm9q") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "kube-api-access-9bm9q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.118600 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.122518 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: W0126 00:22:11.131537 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod504ba906_9364_4a51_a3f6_812ff5c459fb.slice/crio-03979459d32bbef4d663e7e46dde8470b5e035438228c4e5491bd8d816dfddfb WatchSource:0}: Error finding container 03979459d32bbef4d663e7e46dde8470b5e035438228c4e5491bd8d816dfddfb: Status 404 returned error can't find the container with id 03979459d32bbef4d663e7e46dde8470b5e035438228c4e5491bd8d816dfddfb Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.134136 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d12cfb26-8718-4def-8f36-c7eaa12bc463" (UID: "d12cfb26-8718-4def-8f36-c7eaa12bc463"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.202748 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-kubelet\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.202800 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0eaea537-ba57-468d-ba3d-8a67d6a0affe-ovnkube-config\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.202829 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.202908 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-run-systemd\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.202963 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-log-socket\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.202989 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-var-lib-openvswitch\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: 
I0126 00:22:11.203010 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-etc-openvswitch\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203029 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-systemd-units\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203058 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-run-ovn-kubernetes\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203356 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-run-netns\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203423 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-cni-bin\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203520 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-run-ovn\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203551 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0eaea537-ba57-468d-ba3d-8a67d6a0affe-env-overrides\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203603 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0eaea537-ba57-468d-ba3d-8a67d6a0affe-ovn-node-metrics-cert\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203648 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0eaea537-ba57-468d-ba3d-8a67d6a0affe-ovnkube-script-lib\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203681 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6x6f\" (UniqueName: \"kubernetes.io/projected/0eaea537-ba57-468d-ba3d-8a67d6a0affe-kube-api-access-v6x6f\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203725 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-run-openvswitch\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203795 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-node-log\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203870 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-cni-netd\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.203982 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-slash\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204145 5107 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204168 5107 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204178 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9bm9q\" (UniqueName: \"kubernetes.io/projected/d12cfb26-8718-4def-8f36-c7eaa12bc463-kube-api-access-9bm9q\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204188 5107 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204197 5107 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204205 5107 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204216 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204224 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204233 5107 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204242 5107 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204249 5107 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204257 5107 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204265 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204274 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d12cfb26-8718-4def-8f36-c7eaa12bc463-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204283 5107 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204295 5107 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204304 5107 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204311 5107 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204320 5107 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.204331 5107 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d12cfb26-8718-4def-8f36-c7eaa12bc463-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323420 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0eaea537-ba57-468d-ba3d-8a67d6a0affe-ovnkube-script-lib\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323505 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v6x6f\" (UniqueName: \"kubernetes.io/projected/0eaea537-ba57-468d-ba3d-8a67d6a0affe-kube-api-access-v6x6f\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323536 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-run-openvswitch\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323560 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-node-log\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323591 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-cni-netd\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323645 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-slash\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323674 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-kubelet\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323694 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0eaea537-ba57-468d-ba3d-8a67d6a0affe-ovnkube-config\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323716 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323739 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-run-systemd\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323770 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-log-socket\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323793 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-var-lib-openvswitch\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323819 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-etc-openvswitch\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323842 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-systemd-units\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323871 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-run-ovn-kubernetes\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323918 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-run-netns\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323942 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-cni-bin\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323977 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-run-ovn\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.323995 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0eaea537-ba57-468d-ba3d-8a67d6a0affe-env-overrides\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.324014 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0eaea537-ba57-468d-ba3d-8a67d6a0affe-ovn-node-metrics-cert\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.325011 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-run-systemd\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.325298 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-slash\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.325333 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-kubelet\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.325444 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-cni-netd\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326134 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-var-lib-openvswitch\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326345 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-run-ovn-kubernetes\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326383 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-cni-bin\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326425 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-run-netns\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326427 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-etc-openvswitch\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326459 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-systemd-units\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326469 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-run-openvswitch\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326494 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-run-ovn\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326560 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0eaea537-ba57-468d-ba3d-8a67d6a0affe-ovnkube-config\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326627 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-node-log\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326617 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-log-socket\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.326859 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0eaea537-ba57-468d-ba3d-8a67d6a0affe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ptknl\" (UID: 
\"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.327050 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0eaea537-ba57-468d-ba3d-8a67d6a0affe-ovnkube-script-lib\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.327401 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0eaea537-ba57-468d-ba3d-8a67d6a0affe-env-overrides\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.331988 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0eaea537-ba57-468d-ba3d-8a67d6a0affe-ovn-node-metrics-cert\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.347996 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6x6f\" (UniqueName: \"kubernetes.io/projected/0eaea537-ba57-468d-ba3d-8a67d6a0affe-kube-api-access-v6x6f\") pod \"ovnkube-node-ptknl\" (UID: \"0eaea537-ba57-468d-ba3d-8a67d6a0affe\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.415433 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nvznv_d12cfb26-8718-4def-8f36-c7eaa12bc463/ovn-acl-logging/0.log" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.416103 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nvznv_d12cfb26-8718-4def-8f36-c7eaa12bc463/ovn-controller/0.log" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.416709 5107 generic.go:358] "Generic (PLEG): container finished" podID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerID="a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5" exitCode=0 Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.416763 5107 generic.go:358] "Generic (PLEG): container finished" podID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerID="ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4" exitCode=0 Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.416776 5107 generic.go:358] "Generic (PLEG): container finished" podID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerID="232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db" exitCode=0 Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.416787 5107 generic.go:358] "Generic (PLEG): container finished" podID="d12cfb26-8718-4def-8f36-c7eaa12bc463" containerID="e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb" exitCode=0 Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.416838 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerDied","Data":"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5"} Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.416910 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.416959 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerDied","Data":"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4"} Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.416986 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerDied","Data":"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db"} Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.416998 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerDied","Data":"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb"} Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.417013 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nvznv" event={"ID":"d12cfb26-8718-4def-8f36-c7eaa12bc463","Type":"ContainerDied","Data":"5a71931c9f6b4da462548b6468f1ae63256b59a3616870102e815d45c9040a1c"} Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.417015 5107 scope.go:117] "RemoveContainer" containerID="a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.422650 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f2mpq_2e5342d5-2d0c-458d-94b7-25c802ce298a/kube-multus/0.log" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.422816 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f2mpq" event={"ID":"2e5342d5-2d0c-458d-94b7-25c802ce298a","Type":"ContainerStarted","Data":"5610fb9f513de036a675bae0170911d087d9a0321e9a30cd7c7cac51525dd401"} Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.432695 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" event={"ID":"ec13f4fa-c252-4f6a-9a31-43f70366ae48","Type":"ContainerDied","Data":"5fd9d33e7f51a3f529e4963e034176ddf70e34e2fdfa54f0ba68a3a217cae605"} Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.433049 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.445421 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" event={"ID":"504ba906-9364-4a51-a3f6-812ff5c459fb","Type":"ContainerStarted","Data":"03979459d32bbef4d663e7e46dde8470b5e035438228c4e5491bd8d816dfddfb"} Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.461371 5107 scope.go:117] "RemoveContainer" containerID="ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.480009 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn"] Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.484705 5107 scope.go:117] "RemoveContainer" containerID="232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.485459 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-kcwjn"] Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.501622 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nvznv"] Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.504543 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nvznv"] Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.508988 5107 scope.go:117] "RemoveContainer" containerID="e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.523545 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.527853 5107 scope.go:117] "RemoveContainer" containerID="37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.544550 5107 scope.go:117] "RemoveContainer" containerID="2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c" Jan 26 00:22:11 crc kubenswrapper[5107]: W0126 00:22:11.555051 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0eaea537_ba57_468d_ba3d_8a67d6a0affe.slice/crio-9a94a7b7f752d87166575e36d7bf4bf8ca2d4113fe02e89d483c5b46cd4cb0dc WatchSource:0}: Error finding container 9a94a7b7f752d87166575e36d7bf4bf8ca2d4113fe02e89d483c5b46cd4cb0dc: Status 404 returned error can't find the container with id 9a94a7b7f752d87166575e36d7bf4bf8ca2d4113fe02e89d483c5b46cd4cb0dc Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.561193 5107 scope.go:117] "RemoveContainer" containerID="ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.579731 5107 scope.go:117] "RemoveContainer" containerID="de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.599341 5107 scope.go:117] "RemoveContainer" containerID="48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.614534 5107 scope.go:117] "RemoveContainer" containerID="a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5" Jan 26 00:22:11 crc kubenswrapper[5107]: E0126 00:22:11.615203 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5\": container with ID starting with a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5 not found: ID does not exist" containerID="a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.615362 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5"} err="failed to get container status \"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5\": rpc error: code = NotFound desc = could not find container \"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5\": container with ID starting with a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.615441 5107 scope.go:117] "RemoveContainer" containerID="ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4" Jan 26 00:22:11 crc kubenswrapper[5107]: E0126 00:22:11.615969 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4\": container with ID starting with ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4 not found: ID does not exist" containerID="ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.616078 5107 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4"} err="failed to get container status \"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4\": rpc error: code = NotFound desc = could not find container \"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4\": container with ID starting with ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.616114 5107 scope.go:117] "RemoveContainer" containerID="232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db" Jan 26 00:22:11 crc kubenswrapper[5107]: E0126 00:22:11.616592 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db\": container with ID starting with 232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db not found: ID does not exist" containerID="232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.616617 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db"} err="failed to get container status \"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db\": rpc error: code = NotFound desc = could not find container \"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db\": container with ID starting with 232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.616632 5107 scope.go:117] "RemoveContainer" containerID="e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb" Jan 26 00:22:11 crc kubenswrapper[5107]: E0126 00:22:11.616981 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb\": container with ID starting with e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb not found: ID does not exist" containerID="e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.617008 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb"} err="failed to get container status \"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb\": rpc error: code = NotFound desc = could not find container \"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb\": container with ID starting with e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.617021 5107 scope.go:117] "RemoveContainer" containerID="37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2" Jan 26 00:22:11 crc kubenswrapper[5107]: E0126 00:22:11.617453 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2\": container with ID starting with 37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2 not found: ID does not exist" 
containerID="37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.617467 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2"} err="failed to get container status \"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2\": rpc error: code = NotFound desc = could not find container \"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2\": container with ID starting with 37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.617479 5107 scope.go:117] "RemoveContainer" containerID="2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c" Jan 26 00:22:11 crc kubenswrapper[5107]: E0126 00:22:11.617812 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c\": container with ID starting with 2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c not found: ID does not exist" containerID="2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.617832 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c"} err="failed to get container status \"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c\": rpc error: code = NotFound desc = could not find container \"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c\": container with ID starting with 2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.617847 5107 scope.go:117] "RemoveContainer" containerID="ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c" Jan 26 00:22:11 crc kubenswrapper[5107]: E0126 00:22:11.618402 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c\": container with ID starting with ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c not found: ID does not exist" containerID="ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.618430 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c"} err="failed to get container status \"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c\": rpc error: code = NotFound desc = could not find container \"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c\": container with ID starting with ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.618444 5107 scope.go:117] "RemoveContainer" containerID="de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26" Jan 26 00:22:11 crc kubenswrapper[5107]: E0126 00:22:11.618915 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26\": container with ID starting with de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26 not found: ID does not exist" containerID="de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.618937 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26"} err="failed to get container status \"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26\": rpc error: code = NotFound desc = could not find container \"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26\": container with ID starting with de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.618948 5107 scope.go:117] "RemoveContainer" containerID="48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748" Jan 26 00:22:11 crc kubenswrapper[5107]: E0126 00:22:11.619321 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748\": container with ID starting with 48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748 not found: ID does not exist" containerID="48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.619374 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748"} err="failed to get container status \"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748\": rpc error: code = NotFound desc = could not find container \"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748\": container with ID starting with 48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.619407 5107 scope.go:117] "RemoveContainer" containerID="a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.619941 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5"} err="failed to get container status \"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5\": rpc error: code = NotFound desc = could not find container \"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5\": container with ID starting with a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.619957 5107 scope.go:117] "RemoveContainer" containerID="ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.620380 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4"} err="failed to get container status \"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4\": rpc error: code = NotFound desc = could not find container \"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4\": container with ID starting with 
ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.620403 5107 scope.go:117] "RemoveContainer" containerID="232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.620717 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db"} err="failed to get container status \"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db\": rpc error: code = NotFound desc = could not find container \"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db\": container with ID starting with 232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.620757 5107 scope.go:117] "RemoveContainer" containerID="e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.621127 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb"} err="failed to get container status \"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb\": rpc error: code = NotFound desc = could not find container \"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb\": container with ID starting with e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.621192 5107 scope.go:117] "RemoveContainer" containerID="37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.622232 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2"} err="failed to get container status \"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2\": rpc error: code = NotFound desc = could not find container \"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2\": container with ID starting with 37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.622258 5107 scope.go:117] "RemoveContainer" containerID="2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.622482 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c"} err="failed to get container status \"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c\": rpc error: code = NotFound desc = could not find container \"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c\": container with ID starting with 2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.622512 5107 scope.go:117] "RemoveContainer" containerID="ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.622713 5107 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c"} err="failed to get container status \"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c\": rpc error: code = NotFound desc = could not find container \"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c\": container with ID starting with ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.622735 5107 scope.go:117] "RemoveContainer" containerID="de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.623079 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26"} err="failed to get container status \"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26\": rpc error: code = NotFound desc = could not find container \"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26\": container with ID starting with de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.623112 5107 scope.go:117] "RemoveContainer" containerID="48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.623322 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748"} err="failed to get container status \"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748\": rpc error: code = NotFound desc = could not find container \"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748\": container with ID starting with 48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.623416 5107 scope.go:117] "RemoveContainer" containerID="a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.623694 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5"} err="failed to get container status \"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5\": rpc error: code = NotFound desc = could not find container \"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5\": container with ID starting with a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.623721 5107 scope.go:117] "RemoveContainer" containerID="ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.623978 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4"} err="failed to get container status \"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4\": rpc error: code = NotFound desc = could not find container \"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4\": container with ID starting with ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4 not found: ID does not exist" Jan 
26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.624001 5107 scope.go:117] "RemoveContainer" containerID="232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.624253 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db"} err="failed to get container status \"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db\": rpc error: code = NotFound desc = could not find container \"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db\": container with ID starting with 232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.624282 5107 scope.go:117] "RemoveContainer" containerID="e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.624530 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb"} err="failed to get container status \"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb\": rpc error: code = NotFound desc = could not find container \"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb\": container with ID starting with e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.624552 5107 scope.go:117] "RemoveContainer" containerID="37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.624784 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2"} err="failed to get container status \"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2\": rpc error: code = NotFound desc = could not find container \"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2\": container with ID starting with 37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.624810 5107 scope.go:117] "RemoveContainer" containerID="2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.625135 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c"} err="failed to get container status \"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c\": rpc error: code = NotFound desc = could not find container \"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c\": container with ID starting with 2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.625165 5107 scope.go:117] "RemoveContainer" containerID="ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.625430 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c"} err="failed to get container status 
\"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c\": rpc error: code = NotFound desc = could not find container \"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c\": container with ID starting with ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.625458 5107 scope.go:117] "RemoveContainer" containerID="de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.625801 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26"} err="failed to get container status \"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26\": rpc error: code = NotFound desc = could not find container \"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26\": container with ID starting with de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.625834 5107 scope.go:117] "RemoveContainer" containerID="48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.626133 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748"} err="failed to get container status \"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748\": rpc error: code = NotFound desc = could not find container \"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748\": container with ID starting with 48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.626160 5107 scope.go:117] "RemoveContainer" containerID="a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.626423 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5"} err="failed to get container status \"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5\": rpc error: code = NotFound desc = could not find container \"a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5\": container with ID starting with a9ab2a653b2b73d826c9ddea0b68582c394418fa92ab46bb0c7d4eda8b3812f5 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.626448 5107 scope.go:117] "RemoveContainer" containerID="ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.626856 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4"} err="failed to get container status \"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4\": rpc error: code = NotFound desc = could not find container \"ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4\": container with ID starting with ee08ffbc86db13f1cc4efa26fb4361ac81d024c5931eafb0c463eb9adbd02ae4 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.626871 5107 scope.go:117] "RemoveContainer" 
containerID="232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.627204 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db"} err="failed to get container status \"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db\": rpc error: code = NotFound desc = could not find container \"232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db\": container with ID starting with 232f849813a1d424da2e7596712c5dda8da9c73e44d49ee01ec000f2b14132db not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.627230 5107 scope.go:117] "RemoveContainer" containerID="e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.627601 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb"} err="failed to get container status \"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb\": rpc error: code = NotFound desc = could not find container \"e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb\": container with ID starting with e7eb56451f4e409d4fa1dfd0c69d38e6d43fe5c4dc0cae8908d364b3dce0e4eb not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.627646 5107 scope.go:117] "RemoveContainer" containerID="37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.627953 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2"} err="failed to get container status \"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2\": rpc error: code = NotFound desc = could not find container \"37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2\": container with ID starting with 37cee4666015f0f68030c5480638195a022b8a11aa1f62a9ad196309182af9e2 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.627979 5107 scope.go:117] "RemoveContainer" containerID="2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.628204 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c"} err="failed to get container status \"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c\": rpc error: code = NotFound desc = could not find container \"2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c\": container with ID starting with 2e36ae47cf4b659b6fc689c141ea8a385139feeb69d144308493c4bd123dea9c not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.628226 5107 scope.go:117] "RemoveContainer" containerID="ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.628501 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c"} err="failed to get container status \"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c\": rpc error: code = NotFound desc = could not find 
container \"ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c\": container with ID starting with ec04ec9e5194c0682a9a154223e66c1963b4ee0d234f3caa24c0e1901caea55c not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.628540 5107 scope.go:117] "RemoveContainer" containerID="de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.628792 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26"} err="failed to get container status \"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26\": rpc error: code = NotFound desc = could not find container \"de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26\": container with ID starting with de732b9903d7b08c68b9df371201978109d26eaebad3de3ffd9963f118455a26 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.628824 5107 scope.go:117] "RemoveContainer" containerID="48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.629079 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748"} err="failed to get container status \"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748\": rpc error: code = NotFound desc = could not find container \"48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748\": container with ID starting with 48490e24f72bfc85170134defa73f6607b7f49b3e04b249cb1993647c0168748 not found: ID does not exist" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.629108 5107 scope.go:117] "RemoveContainer" containerID="09cf3f70d300e3ac7e3df79f5dc1360a09542552aab2a9a0f740255d5e671e32" Jan 26 00:22:11 crc kubenswrapper[5107]: I0126 00:22:11.649353 5107 scope.go:117] "RemoveContainer" containerID="278a16c98dd11167e9a1c7d0851eac90113bcf9aeda2aa7628d1d0ac6ad6ec60" Jan 26 00:22:12 crc kubenswrapper[5107]: I0126 00:22:12.121712 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d12cfb26-8718-4def-8f36-c7eaa12bc463" path="/var/lib/kubelet/pods/d12cfb26-8718-4def-8f36-c7eaa12bc463/volumes" Jan 26 00:22:12 crc kubenswrapper[5107]: I0126 00:22:12.123343 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec13f4fa-c252-4f6a-9a31-43f70366ae48" path="/var/lib/kubelet/pods/ec13f4fa-c252-4f6a-9a31-43f70366ae48/volumes" Jan 26 00:22:12 crc kubenswrapper[5107]: I0126 00:22:12.451491 5107 generic.go:358] "Generic (PLEG): container finished" podID="0eaea537-ba57-468d-ba3d-8a67d6a0affe" containerID="a2c1306f4fecdd213e9ea9cadc55ddd937fb2bc448b6a167804397db30187858" exitCode=0 Jan 26 00:22:12 crc kubenswrapper[5107]: I0126 00:22:12.451599 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" event={"ID":"0eaea537-ba57-468d-ba3d-8a67d6a0affe","Type":"ContainerDied","Data":"a2c1306f4fecdd213e9ea9cadc55ddd937fb2bc448b6a167804397db30187858"} Jan 26 00:22:12 crc kubenswrapper[5107]: I0126 00:22:12.451664 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" event={"ID":"0eaea537-ba57-468d-ba3d-8a67d6a0affe","Type":"ContainerStarted","Data":"9a94a7b7f752d87166575e36d7bf4bf8ca2d4113fe02e89d483c5b46cd4cb0dc"} Jan 26 00:22:12 crc kubenswrapper[5107]: I0126 
00:22:12.456046 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" event={"ID":"504ba906-9364-4a51-a3f6-812ff5c459fb","Type":"ContainerStarted","Data":"b1e8e28777feae9e0b1574e89b8f5607891cee559eb809ff496ed42de2143cb4"} Jan 26 00:22:13 crc kubenswrapper[5107]: I0126 00:22:13.468981 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" event={"ID":"504ba906-9364-4a51-a3f6-812ff5c459fb","Type":"ContainerStarted","Data":"f95f1ff570d4983b996c89d688a425d1c77351f8807f56b9ad2acc6340a95082"} Jan 26 00:22:13 crc kubenswrapper[5107]: I0126 00:22:13.503857 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qzjmh" podStartSLOduration=4.503837809 podStartE2EDuration="4.503837809s" podCreationTimestamp="2026-01-26 00:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:22:13.500027008 +0000 UTC m=+778.417621374" watchObservedRunningTime="2026-01-26 00:22:13.503837809 +0000 UTC m=+778.421432155" Jan 26 00:22:14 crc kubenswrapper[5107]: I0126 00:22:14.478488 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" event={"ID":"0eaea537-ba57-468d-ba3d-8a67d6a0affe","Type":"ContainerStarted","Data":"9631ff220283f01192d676bfa4d1e843469292fab37ce16dbc1c42785c7ea391"} Jan 26 00:22:15 crc kubenswrapper[5107]: I0126 00:22:15.490102 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" event={"ID":"0eaea537-ba57-468d-ba3d-8a67d6a0affe","Type":"ContainerStarted","Data":"5a56eefacdbd4d15d3ac104b0bbfb8d333fca376df712e431f0481c61ecfb151"} Jan 26 00:22:15 crc kubenswrapper[5107]: I0126 00:22:15.490165 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" event={"ID":"0eaea537-ba57-468d-ba3d-8a67d6a0affe","Type":"ContainerStarted","Data":"34d8bfbc0f614349f4ed990a9ea0c720bdb92a9d39c059262b617d767b3ce004"} Jan 26 00:22:15 crc kubenswrapper[5107]: I0126 00:22:15.490179 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" event={"ID":"0eaea537-ba57-468d-ba3d-8a67d6a0affe","Type":"ContainerStarted","Data":"769cba636b32f0d3a319fd9490b0700484242b2d49c9d1dac8a5e70dd5f1e292"} Jan 26 00:22:15 crc kubenswrapper[5107]: I0126 00:22:15.490191 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" event={"ID":"0eaea537-ba57-468d-ba3d-8a67d6a0affe","Type":"ContainerStarted","Data":"96779b761a8b404fcbc3ad29ec0616235949c0014c2e3dfde478ea050ae14386"} Jan 26 00:22:15 crc kubenswrapper[5107]: I0126 00:22:15.490201 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" event={"ID":"0eaea537-ba57-468d-ba3d-8a67d6a0affe","Type":"ContainerStarted","Data":"0104a27be127b954a8faa2f5aaffb2d5fad63e868195cf38efe54a2b242259d0"} Jan 26 00:22:21 crc kubenswrapper[5107]: I0126 00:22:21.536834 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" event={"ID":"0eaea537-ba57-468d-ba3d-8a67d6a0affe","Type":"ContainerStarted","Data":"f2b50c9ad85896ba368f8ee7550ea6c69115f368a016da16e55621ddca9a4a66"} Jan 26 00:22:25 crc kubenswrapper[5107]: I0126 00:22:25.671280 5107 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" event={"ID":"0eaea537-ba57-468d-ba3d-8a67d6a0affe","Type":"ContainerStarted","Data":"26d472675b286f5d7c9995096dcebf24a34afc9b56c7a1991f384b93bc535551"} Jan 26 00:22:25 crc kubenswrapper[5107]: I0126 00:22:25.671865 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:25 crc kubenswrapper[5107]: I0126 00:22:25.672078 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:25 crc kubenswrapper[5107]: I0126 00:22:25.672170 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:25 crc kubenswrapper[5107]: I0126 00:22:25.741757 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:25 crc kubenswrapper[5107]: I0126 00:22:25.744403 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:22:25 crc kubenswrapper[5107]: I0126 00:22:25.785407 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" podStartSLOduration=14.785383337 podStartE2EDuration="14.785383337s" podCreationTimestamp="2026-01-26 00:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:22:25.730712408 +0000 UTC m=+790.648306774" watchObservedRunningTime="2026-01-26 00:22:25.785383337 +0000 UTC m=+790.702977683" Jan 26 00:22:57 crc kubenswrapper[5107]: I0126 00:22:57.722025 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ptknl" Jan 26 00:23:30 crc kubenswrapper[5107]: I0126 00:23:30.723949 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:23:30 crc kubenswrapper[5107]: I0126 00:23:30.725080 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:23:33 crc kubenswrapper[5107]: I0126 00:23:33.729353 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6gc5"] Jan 26 00:23:33 crc kubenswrapper[5107]: I0126 00:23:33.730254 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r6gc5" podUID="cf80d393-0243-47e1-89a1-ce7110280256" containerName="registry-server" containerID="cri-o://bab410741134ab80c6ebd8f81c890946b46e2f8161e2bc467bfa8f491d39113b" gracePeriod=30 Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.318260 5107 generic.go:358] "Generic (PLEG): container finished" podID="cf80d393-0243-47e1-89a1-ce7110280256" containerID="bab410741134ab80c6ebd8f81c890946b46e2f8161e2bc467bfa8f491d39113b" exitCode=0 Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.318366 
5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6gc5" event={"ID":"cf80d393-0243-47e1-89a1-ce7110280256","Type":"ContainerDied","Data":"bab410741134ab80c6ebd8f81c890946b46e2f8161e2bc467bfa8f491d39113b"} Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.612358 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.726541 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-catalog-content\") pod \"cf80d393-0243-47e1-89a1-ce7110280256\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.726631 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-utilities\") pod \"cf80d393-0243-47e1-89a1-ce7110280256\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.726694 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx26l\" (UniqueName: \"kubernetes.io/projected/cf80d393-0243-47e1-89a1-ce7110280256-kube-api-access-jx26l\") pod \"cf80d393-0243-47e1-89a1-ce7110280256\" (UID: \"cf80d393-0243-47e1-89a1-ce7110280256\") " Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.730164 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-utilities" (OuterVolumeSpecName: "utilities") pod "cf80d393-0243-47e1-89a1-ce7110280256" (UID: "cf80d393-0243-47e1-89a1-ce7110280256"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.743771 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf80d393-0243-47e1-89a1-ce7110280256" (UID: "cf80d393-0243-47e1-89a1-ce7110280256"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.754540 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf80d393-0243-47e1-89a1-ce7110280256-kube-api-access-jx26l" (OuterVolumeSpecName: "kube-api-access-jx26l") pod "cf80d393-0243-47e1-89a1-ce7110280256" (UID: "cf80d393-0243-47e1-89a1-ce7110280256"). InnerVolumeSpecName "kube-api-access-jx26l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.828335 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.828407 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf80d393-0243-47e1-89a1-ce7110280256-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:34 crc kubenswrapper[5107]: I0126 00:23:34.828427 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jx26l\" (UniqueName: \"kubernetes.io/projected/cf80d393-0243-47e1-89a1-ce7110280256-kube-api-access-jx26l\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:35 crc kubenswrapper[5107]: I0126 00:23:35.328792 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6gc5" event={"ID":"cf80d393-0243-47e1-89a1-ce7110280256","Type":"ContainerDied","Data":"87e3116476b762fc3a70cffa5eba64ce44e37bdfc1bc18eebc234c9d7799b215"} Jan 26 00:23:35 crc kubenswrapper[5107]: I0126 00:23:35.328856 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6gc5" Jan 26 00:23:35 crc kubenswrapper[5107]: I0126 00:23:35.329333 5107 scope.go:117] "RemoveContainer" containerID="bab410741134ab80c6ebd8f81c890946b46e2f8161e2bc467bfa8f491d39113b" Jan 26 00:23:35 crc kubenswrapper[5107]: I0126 00:23:35.351692 5107 scope.go:117] "RemoveContainer" containerID="2138282670cae47329d197e2a309ca1a84896edbbb16bc947144a7176f07ed3f" Jan 26 00:23:35 crc kubenswrapper[5107]: I0126 00:23:35.368207 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6gc5"] Jan 26 00:23:35 crc kubenswrapper[5107]: I0126 00:23:35.373897 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6gc5"] Jan 26 00:23:35 crc kubenswrapper[5107]: I0126 00:23:35.379995 5107 scope.go:117] "RemoveContainer" containerID="4cb79ab214daf1415e64d24429e2ca2b789ebbf6b72c922b1b0aaa4a1931ef15" Jan 26 00:23:36 crc kubenswrapper[5107]: I0126 00:23:36.121529 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf80d393-0243-47e1-89a1-ce7110280256" path="/var/lib/kubelet/pods/cf80d393-0243-47e1-89a1-ce7110280256/volumes" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.066397 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw"] Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.069446 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf80d393-0243-47e1-89a1-ce7110280256" containerName="registry-server" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.069494 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf80d393-0243-47e1-89a1-ce7110280256" containerName="registry-server" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.069543 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf80d393-0243-47e1-89a1-ce7110280256" containerName="extract-content" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.069558 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf80d393-0243-47e1-89a1-ce7110280256" containerName="extract-content" Jan 26 
00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.069605 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf80d393-0243-47e1-89a1-ce7110280256" containerName="extract-utilities" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.069614 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf80d393-0243-47e1-89a1-ce7110280256" containerName="extract-utilities" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.069924 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf80d393-0243-47e1-89a1-ce7110280256" containerName="registry-server" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.095082 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.095451 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw"] Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.100381 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.194017 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.194249 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.194308 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkj7d\" (UniqueName: \"kubernetes.io/projected/904f82dd-7ba2-482e-b5d4-15f043ddea94-kube-api-access-vkj7d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.296403 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.296490 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.296538 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vkj7d\" (UniqueName: \"kubernetes.io/projected/904f82dd-7ba2-482e-b5d4-15f043ddea94-kube-api-access-vkj7d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.297195 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.297348 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.319390 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkj7d\" (UniqueName: \"kubernetes.io/projected/904f82dd-7ba2-482e-b5d4-15f043ddea94-kube-api-access-vkj7d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.422465 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:38 crc kubenswrapper[5107]: I0126 00:23:38.731670 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw"] Jan 26 00:23:39 crc kubenswrapper[5107]: I0126 00:23:39.373059 5107 generic.go:358] "Generic (PLEG): container finished" podID="904f82dd-7ba2-482e-b5d4-15f043ddea94" containerID="d2f888affc8063027378fd9ff1ae50919ed56f3795d34157121672775a72bffa" exitCode=0 Jan 26 00:23:39 crc kubenswrapper[5107]: I0126 00:23:39.373402 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" event={"ID":"904f82dd-7ba2-482e-b5d4-15f043ddea94","Type":"ContainerDied","Data":"d2f888affc8063027378fd9ff1ae50919ed56f3795d34157121672775a72bffa"} Jan 26 00:23:39 crc kubenswrapper[5107]: I0126 00:23:39.373445 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" event={"ID":"904f82dd-7ba2-482e-b5d4-15f043ddea94","Type":"ContainerStarted","Data":"5e6c07a5b41f17e11a292a357faf6f865272a096014b5412a1e5ddefa00f0f91"} Jan 26 00:23:40 crc kubenswrapper[5107]: I0126 00:23:40.987046 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qjccf"] Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.343335 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjccf"] Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.343616 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.443983 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-catalog-content\") pod \"redhat-operators-qjccf\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.444444 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvkf6\" (UniqueName: \"kubernetes.io/projected/b88ae022-232b-4b97-87b8-ab58d0d53b45-kube-api-access-lvkf6\") pod \"redhat-operators-qjccf\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.445313 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-utilities\") pod \"redhat-operators-qjccf\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.777993 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lvkf6\" (UniqueName: \"kubernetes.io/projected/b88ae022-232b-4b97-87b8-ab58d0d53b45-kube-api-access-lvkf6\") pod \"redhat-operators-qjccf\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.778096 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-utilities\") pod \"redhat-operators-qjccf\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.778142 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-catalog-content\") pod \"redhat-operators-qjccf\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.778965 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-catalog-content\") pod \"redhat-operators-qjccf\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.780239 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-utilities\") pod \"redhat-operators-qjccf\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.802931 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvkf6\" (UniqueName: \"kubernetes.io/projected/b88ae022-232b-4b97-87b8-ab58d0d53b45-kube-api-access-lvkf6\") pod \"redhat-operators-qjccf\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:41 crc kubenswrapper[5107]: I0126 00:23:41.970521 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:42 crc kubenswrapper[5107]: I0126 00:23:42.700927 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjccf"] Jan 26 00:23:43 crc kubenswrapper[5107]: I0126 00:23:43.527511 5107 generic.go:358] "Generic (PLEG): container finished" podID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerID="9e55c5b2e8d01a853c549ec61b38ce1364398229328ff30a811faadba0cf6acc" exitCode=0 Jan 26 00:23:43 crc kubenswrapper[5107]: I0126 00:23:43.527688 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjccf" event={"ID":"b88ae022-232b-4b97-87b8-ab58d0d53b45","Type":"ContainerDied","Data":"9e55c5b2e8d01a853c549ec61b38ce1364398229328ff30a811faadba0cf6acc"} Jan 26 00:23:43 crc kubenswrapper[5107]: I0126 00:23:43.527847 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjccf" event={"ID":"b88ae022-232b-4b97-87b8-ab58d0d53b45","Type":"ContainerStarted","Data":"d6987f438d55e2f6bcc395d26f0b48177834fc3d17cc5bc1c719c47103b348c2"} Jan 26 00:23:43 crc kubenswrapper[5107]: I0126 00:23:43.535017 5107 generic.go:358] "Generic (PLEG): container finished" podID="904f82dd-7ba2-482e-b5d4-15f043ddea94" containerID="ebfbe3bb3e8ba63129288ddf14ce2e439bdab406ccc58ddbb1a03c4441cd9391" exitCode=0 Jan 26 00:23:43 crc kubenswrapper[5107]: I0126 00:23:43.535258 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" event={"ID":"904f82dd-7ba2-482e-b5d4-15f043ddea94","Type":"ContainerDied","Data":"ebfbe3bb3e8ba63129288ddf14ce2e439bdab406ccc58ddbb1a03c4441cd9391"} Jan 26 00:23:44 crc kubenswrapper[5107]: I0126 00:23:44.544550 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" event={"ID":"904f82dd-7ba2-482e-b5d4-15f043ddea94","Type":"ContainerStarted","Data":"a8f6c7be45387c648d44de088075172c2f11c35cd155f872dd83f8167ad1f0a3"} Jan 26 00:23:44 crc kubenswrapper[5107]: I0126 00:23:44.570765 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" podStartSLOduration=3.748307861 podStartE2EDuration="6.570738558s" podCreationTimestamp="2026-01-26 00:23:38 +0000 UTC" firstStartedPulling="2026-01-26 00:23:39.377243483 +0000 UTC m=+864.294837829" lastFinishedPulling="2026-01-26 00:23:42.19967417 +0000 UTC m=+867.117268526" observedRunningTime="2026-01-26 00:23:44.567148054 +0000 UTC m=+869.484742400" watchObservedRunningTime="2026-01-26 00:23:44.570738558 +0000 UTC m=+869.488332904" Jan 26 00:23:45 crc kubenswrapper[5107]: I0126 00:23:45.559317 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjccf" event={"ID":"b88ae022-232b-4b97-87b8-ab58d0d53b45","Type":"ContainerStarted","Data":"00f05fd189ad51d19a6bed281eb774d88f823d3121b80c4c46e8e11c2ad55122"} Jan 26 00:23:45 crc kubenswrapper[5107]: I0126 00:23:45.562866 5107 generic.go:358] "Generic (PLEG): container finished" podID="904f82dd-7ba2-482e-b5d4-15f043ddea94" containerID="a8f6c7be45387c648d44de088075172c2f11c35cd155f872dd83f8167ad1f0a3" exitCode=0 Jan 26 00:23:45 crc kubenswrapper[5107]: I0126 00:23:45.563006 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" event={"ID":"904f82dd-7ba2-482e-b5d4-15f043ddea94","Type":"ContainerDied","Data":"a8f6c7be45387c648d44de088075172c2f11c35cd155f872dd83f8167ad1f0a3"} Jan 26 00:23:47 crc kubenswrapper[5107]: I0126 00:23:47.681785 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx"] Jan 26 00:23:47 crc kubenswrapper[5107]: I0126 00:23:47.870195 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:23:47 crc kubenswrapper[5107]: I0126 00:23:47.873378 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx"] Jan 26 00:23:47 crc kubenswrapper[5107]: I0126 00:23:47.934361 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:23:47 crc kubenswrapper[5107]: I0126 00:23:47.935104 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rddf\" (UniqueName: \"kubernetes.io/projected/2f4334a5-7577-470f-b5f7-32206240626a-kube-api-access-9rddf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:23:47 crc kubenswrapper[5107]: I0126 00:23:47.935323 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.036371 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9rddf\" (UniqueName: \"kubernetes.io/projected/2f4334a5-7577-470f-b5f7-32206240626a-kube-api-access-9rddf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.036495 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.036548 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-bundle\") pod 
\"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.037675 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.037689 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.065474 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rddf\" (UniqueName: \"kubernetes.io/projected/2f4334a5-7577-470f-b5f7-32206240626a-kube-api-access-9rddf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.163623 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.203810 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.239605 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-util\") pod \"904f82dd-7ba2-482e-b5d4-15f043ddea94\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.239719 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkj7d\" (UniqueName: \"kubernetes.io/projected/904f82dd-7ba2-482e-b5d4-15f043ddea94-kube-api-access-vkj7d\") pod \"904f82dd-7ba2-482e-b5d4-15f043ddea94\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.239825 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-bundle\") pod \"904f82dd-7ba2-482e-b5d4-15f043ddea94\" (UID: \"904f82dd-7ba2-482e-b5d4-15f043ddea94\") " Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.242896 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-bundle" (OuterVolumeSpecName: "bundle") pod "904f82dd-7ba2-482e-b5d4-15f043ddea94" (UID: "904f82dd-7ba2-482e-b5d4-15f043ddea94"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.298519 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-util" (OuterVolumeSpecName: "util") pod "904f82dd-7ba2-482e-b5d4-15f043ddea94" (UID: "904f82dd-7ba2-482e-b5d4-15f043ddea94"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.299574 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/904f82dd-7ba2-482e-b5d4-15f043ddea94-kube-api-access-vkj7d" (OuterVolumeSpecName: "kube-api-access-vkj7d") pod "904f82dd-7ba2-482e-b5d4-15f043ddea94" (UID: "904f82dd-7ba2-482e-b5d4-15f043ddea94"). InnerVolumeSpecName "kube-api-access-vkj7d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.341191 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.341252 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vkj7d\" (UniqueName: \"kubernetes.io/projected/904f82dd-7ba2-482e-b5d4-15f043ddea94-kube-api-access-vkj7d\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.341317 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/904f82dd-7ba2-482e-b5d4-15f043ddea94-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.623184 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" event={"ID":"904f82dd-7ba2-482e-b5d4-15f043ddea94","Type":"ContainerDied","Data":"5e6c07a5b41f17e11a292a357faf6f865272a096014b5412a1e5ddefa00f0f91"} Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.623260 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6c07a5b41f17e11a292a357faf6f865272a096014b5412a1e5ddefa00f0f91" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.623406 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.765965 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq"] Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.768383 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="904f82dd-7ba2-482e-b5d4-15f043ddea94" containerName="util" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.768424 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="904f82dd-7ba2-482e-b5d4-15f043ddea94" containerName="util" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.768452 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="904f82dd-7ba2-482e-b5d4-15f043ddea94" containerName="extract" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.768466 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="904f82dd-7ba2-482e-b5d4-15f043ddea94" containerName="extract" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.768499 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="904f82dd-7ba2-482e-b5d4-15f043ddea94" containerName="pull" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.768512 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="904f82dd-7ba2-482e-b5d4-15f043ddea94" containerName="pull" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.768664 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="904f82dd-7ba2-482e-b5d4-15f043ddea94" containerName="extract" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.858271 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq"] Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.858540 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:23:48 crc kubenswrapper[5107]: I0126 00:23:48.946434 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx"] Jan 26 00:23:48 crc kubenswrapper[5107]: W0126 00:23:48.951533 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f4334a5_7577_470f_b5f7_32206240626a.slice/crio-21a3afe2379fe75475706243bf28acf898a088516e0ea01aaf093bb02f6b0366 WatchSource:0}: Error finding container 21a3afe2379fe75475706243bf28acf898a088516e0ea01aaf093bb02f6b0366: Status 404 returned error can't find the container with id 21a3afe2379fe75475706243bf28acf898a088516e0ea01aaf093bb02f6b0366 Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.050355 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5dk9\" (UniqueName: \"kubernetes.io/projected/9956eca6-8cc8-40ac-9b69-9500db778f1a-kube-api-access-r5dk9\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.050417 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.050494 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.151800 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.152331 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.152445 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r5dk9\" (UniqueName: \"kubernetes.io/projected/9956eca6-8cc8-40ac-9b69-9500db778f1a-kube-api-access-r5dk9\") pod 
\"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.152546 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.153484 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.186296 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5dk9\" (UniqueName: \"kubernetes.io/projected/9956eca6-8cc8-40ac-9b69-9500db778f1a-kube-api-access-r5dk9\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.191494 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hsmh8"] Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.191769 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.208532 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hsmh8"] Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.208728 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.253433 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-catalog-content\") pod \"certified-operators-hsmh8\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.253551 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-utilities\") pod \"certified-operators-hsmh8\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.253621 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgjg4\" (UniqueName: \"kubernetes.io/projected/e1eae064-42f2-490d-903a-4684b0e5cf58-kube-api-access-xgjg4\") pod \"certified-operators-hsmh8\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.354869 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xgjg4\" (UniqueName: \"kubernetes.io/projected/e1eae064-42f2-490d-903a-4684b0e5cf58-kube-api-access-xgjg4\") pod \"certified-operators-hsmh8\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.354962 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-catalog-content\") pod \"certified-operators-hsmh8\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.355035 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-utilities\") pod \"certified-operators-hsmh8\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.358259 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-utilities\") pod \"certified-operators-hsmh8\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.359640 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-catalog-content\") pod \"certified-operators-hsmh8\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.375274 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgjg4\" (UniqueName: \"kubernetes.io/projected/e1eae064-42f2-490d-903a-4684b0e5cf58-kube-api-access-xgjg4\") pod 
\"certified-operators-hsmh8\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.568359 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.791234 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" event={"ID":"2f4334a5-7577-470f-b5f7-32206240626a","Type":"ContainerStarted","Data":"673ff0c180e07bb3694fc1ad8f6f76c6064a65b1de85b7adea5762f0451f8cdf"} Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.791806 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" event={"ID":"2f4334a5-7577-470f-b5f7-32206240626a","Type":"ContainerStarted","Data":"21a3afe2379fe75475706243bf28acf898a088516e0ea01aaf093bb02f6b0366"} Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.800049 5107 generic.go:358] "Generic (PLEG): container finished" podID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerID="00f05fd189ad51d19a6bed281eb774d88f823d3121b80c4c46e8e11c2ad55122" exitCode=0 Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.800150 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjccf" event={"ID":"b88ae022-232b-4b97-87b8-ab58d0d53b45","Type":"ContainerDied","Data":"00f05fd189ad51d19a6bed281eb774d88f823d3121b80c4c46e8e11c2ad55122"} Jan 26 00:23:49 crc kubenswrapper[5107]: I0126 00:23:49.839766 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq"] Jan 26 00:23:49 crc kubenswrapper[5107]: W0126 00:23:49.974377 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9956eca6_8cc8_40ac_9b69_9500db778f1a.slice/crio-eeb2d403c2575ad7fc23c5f68d3636ddb927f466353b54415d08eee79548a09c WatchSource:0}: Error finding container eeb2d403c2575ad7fc23c5f68d3636ddb927f466353b54415d08eee79548a09c: Status 404 returned error can't find the container with id eeb2d403c2575ad7fc23c5f68d3636ddb927f466353b54415d08eee79548a09c Jan 26 00:23:50 crc kubenswrapper[5107]: I0126 00:23:50.670657 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hsmh8"] Jan 26 00:23:50 crc kubenswrapper[5107]: I0126 00:23:50.813188 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjccf" event={"ID":"b88ae022-232b-4b97-87b8-ab58d0d53b45","Type":"ContainerStarted","Data":"e9e05a4947636dc98be1457cdd5551bca4e46558a2c8ad27d1b2ea6a8dfc1d2d"} Jan 26 00:23:50 crc kubenswrapper[5107]: I0126 00:23:50.817041 5107 generic.go:358] "Generic (PLEG): container finished" podID="2f4334a5-7577-470f-b5f7-32206240626a" containerID="673ff0c180e07bb3694fc1ad8f6f76c6064a65b1de85b7adea5762f0451f8cdf" exitCode=0 Jan 26 00:23:50 crc kubenswrapper[5107]: I0126 00:23:50.817667 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" event={"ID":"2f4334a5-7577-470f-b5f7-32206240626a","Type":"ContainerDied","Data":"673ff0c180e07bb3694fc1ad8f6f76c6064a65b1de85b7adea5762f0451f8cdf"} Jan 26 00:23:50 crc kubenswrapper[5107]: I0126 00:23:50.827029 5107 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsmh8" event={"ID":"e1eae064-42f2-490d-903a-4684b0e5cf58","Type":"ContainerStarted","Data":"18a49595d4acd12ad98d787c2fb77002aa24aa51f73f3bc01bb88ecc27006f53"} Jan 26 00:23:50 crc kubenswrapper[5107]: I0126 00:23:50.831346 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" event={"ID":"9956eca6-8cc8-40ac-9b69-9500db778f1a","Type":"ContainerStarted","Data":"d0ece00773169159908febe1eacd271ad674bb1e0b46296580fa83142cecb868"} Jan 26 00:23:50 crc kubenswrapper[5107]: I0126 00:23:50.831431 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" event={"ID":"9956eca6-8cc8-40ac-9b69-9500db778f1a","Type":"ContainerStarted","Data":"eeb2d403c2575ad7fc23c5f68d3636ddb927f466353b54415d08eee79548a09c"} Jan 26 00:23:51 crc kubenswrapper[5107]: I0126 00:23:51.756708 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qjccf" podStartSLOduration=10.426964716 podStartE2EDuration="11.756674395s" podCreationTimestamp="2026-01-26 00:23:40 +0000 UTC" firstStartedPulling="2026-01-26 00:23:43.529262976 +0000 UTC m=+868.446857332" lastFinishedPulling="2026-01-26 00:23:44.858972665 +0000 UTC m=+869.776567011" observedRunningTime="2026-01-26 00:23:51.754250434 +0000 UTC m=+876.671844790" watchObservedRunningTime="2026-01-26 00:23:51.756674395 +0000 UTC m=+876.674268741" Jan 26 00:23:51 crc kubenswrapper[5107]: I0126 00:23:51.971509 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:51 crc kubenswrapper[5107]: I0126 00:23:51.971641 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:23:52 crc kubenswrapper[5107]: I0126 00:23:52.431295 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x"] Jan 26 00:23:52 crc kubenswrapper[5107]: I0126 00:23:52.901436 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x"] Jan 26 00:23:52 crc kubenswrapper[5107]: I0126 00:23:52.901668 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:23:52 crc kubenswrapper[5107]: I0126 00:23:52.969567 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:23:52 crc kubenswrapper[5107]: I0126 00:23:52.969761 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:23:52 crc kubenswrapper[5107]: I0126 00:23:52.969812 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4xz7\" (UniqueName: \"kubernetes.io/projected/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-kube-api-access-n4xz7\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.013153 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjccf" podUID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerName="registry-server" probeResult="failure" output=< Jan 26 00:23:53 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s Jan 26 00:23:53 crc kubenswrapper[5107]: > Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.071305 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.071389 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4xz7\" (UniqueName: \"kubernetes.io/projected/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-kube-api-access-n4xz7\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.071443 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.071947 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.072232 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.096595 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4xz7\" (UniqueName: \"kubernetes.io/projected/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-kube-api-access-n4xz7\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.353834 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.856198 5107 generic.go:358] "Generic (PLEG): container finished" podID="e1eae064-42f2-490d-903a-4684b0e5cf58" containerID="db5047f6a4a60494e068cf55d4cc49eebce3ee0971db1814d5220dfe51ccee70" exitCode=0 Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.856287 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsmh8" event={"ID":"e1eae064-42f2-490d-903a-4684b0e5cf58","Type":"ContainerDied","Data":"db5047f6a4a60494e068cf55d4cc49eebce3ee0971db1814d5220dfe51ccee70"} Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.860302 5107 generic.go:358] "Generic (PLEG): container finished" podID="9956eca6-8cc8-40ac-9b69-9500db778f1a" containerID="d0ece00773169159908febe1eacd271ad674bb1e0b46296580fa83142cecb868" exitCode=0 Jan 26 00:23:53 crc kubenswrapper[5107]: I0126 00:23:53.860593 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" event={"ID":"9956eca6-8cc8-40ac-9b69-9500db778f1a","Type":"ContainerDied","Data":"d0ece00773169159908febe1eacd271ad674bb1e0b46296580fa83142cecb868"} Jan 26 00:23:54 crc kubenswrapper[5107]: I0126 00:23:54.293943 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x"] Jan 26 00:23:54 crc kubenswrapper[5107]: I0126 00:23:54.878712 5107 generic.go:358] "Generic (PLEG): container finished" podID="2f4334a5-7577-470f-b5f7-32206240626a" containerID="c50aee1615e5d7bc33ce29100f80acf94c009d792c056aa22c5c31736f10806e" exitCode=0 Jan 26 00:23:54 crc kubenswrapper[5107]: I0126 00:23:54.878866 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" event={"ID":"2f4334a5-7577-470f-b5f7-32206240626a","Type":"ContainerDied","Data":"c50aee1615e5d7bc33ce29100f80acf94c009d792c056aa22c5c31736f10806e"} Jan 26 00:23:54 crc kubenswrapper[5107]: I0126 00:23:54.884550 5107 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" event={"ID":"29213ff4-4c9b-4e6d-90be-74a8ef3334c0","Type":"ContainerStarted","Data":"27dfb8b18f707d65b1a1e758b5cdbf62c25bd5ff0b614d36cd348794391fb655"} Jan 26 00:23:54 crc kubenswrapper[5107]: I0126 00:23:54.884618 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" event={"ID":"29213ff4-4c9b-4e6d-90be-74a8ef3334c0","Type":"ContainerStarted","Data":"d5c47c58bf6033275adb04c1fb76559135d8d9f1495853b484ac3bc42eadd244"} Jan 26 00:23:55 crc kubenswrapper[5107]: I0126 00:23:55.895798 5107 generic.go:358] "Generic (PLEG): container finished" podID="29213ff4-4c9b-4e6d-90be-74a8ef3334c0" containerID="27dfb8b18f707d65b1a1e758b5cdbf62c25bd5ff0b614d36cd348794391fb655" exitCode=0 Jan 26 00:23:55 crc kubenswrapper[5107]: I0126 00:23:55.895866 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" event={"ID":"29213ff4-4c9b-4e6d-90be-74a8ef3334c0","Type":"ContainerDied","Data":"27dfb8b18f707d65b1a1e758b5cdbf62c25bd5ff0b614d36cd348794391fb655"} Jan 26 00:23:55 crc kubenswrapper[5107]: I0126 00:23:55.898818 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsmh8" event={"ID":"e1eae064-42f2-490d-903a-4684b0e5cf58","Type":"ContainerStarted","Data":"a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474"} Jan 26 00:23:56 crc kubenswrapper[5107]: I0126 00:23:56.911564 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" event={"ID":"2f4334a5-7577-470f-b5f7-32206240626a","Type":"ContainerStarted","Data":"01b60edeebb72a8461d5a7c1364cb5c7e7930676cb0c94138fca1c166b872b4b"} Jan 26 00:23:57 crc kubenswrapper[5107]: I0126 00:23:57.047125 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" podStartSLOduration=6.891678565 podStartE2EDuration="10.047100236s" podCreationTimestamp="2026-01-26 00:23:47 +0000 UTC" firstStartedPulling="2026-01-26 00:23:50.821268335 +0000 UTC m=+875.738862681" lastFinishedPulling="2026-01-26 00:23:53.976690006 +0000 UTC m=+878.894284352" observedRunningTime="2026-01-26 00:23:57.04584607 +0000 UTC m=+881.963440416" watchObservedRunningTime="2026-01-26 00:23:57.047100236 +0000 UTC m=+881.964694582" Jan 26 00:23:57 crc kubenswrapper[5107]: I0126 00:23:57.928941 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" event={"ID":"9956eca6-8cc8-40ac-9b69-9500db778f1a","Type":"ContainerStarted","Data":"a8b550af18cad0e1bd9341bb604565f60b29c1044b9e0a94ae7f19e7c1d0db13"} Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:23:59.944156 5107 generic.go:358] "Generic (PLEG): container finished" podID="2f4334a5-7577-470f-b5f7-32206240626a" containerID="01b60edeebb72a8461d5a7c1364cb5c7e7930676cb0c94138fca1c166b872b4b" exitCode=0 Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:23:59.944256 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" 
event={"ID":"2f4334a5-7577-470f-b5f7-32206240626a","Type":"ContainerDied","Data":"01b60edeebb72a8461d5a7c1364cb5c7e7930676cb0c94138fca1c166b872b4b"} Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:23:59.946009 5107 generic.go:358] "Generic (PLEG): container finished" podID="e1eae064-42f2-490d-903a-4684b0e5cf58" containerID="a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474" exitCode=0 Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:23:59.946080 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsmh8" event={"ID":"e1eae064-42f2-490d-903a-4684b0e5cf58","Type":"ContainerDied","Data":"a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474"} Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:23:59.953616 5107 generic.go:358] "Generic (PLEG): container finished" podID="9956eca6-8cc8-40ac-9b69-9500db778f1a" containerID="a8b550af18cad0e1bd9341bb604565f60b29c1044b9e0a94ae7f19e7c1d0db13" exitCode=0 Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:23:59.953700 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" event={"ID":"9956eca6-8cc8-40ac-9b69-9500db778f1a","Type":"ContainerDied","Data":"a8b550af18cad0e1bd9341bb604565f60b29c1044b9e0a94ae7f19e7c1d0db13"} Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.150139 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489784-j7vms"] Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.359505 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-j7vms"] Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.360063 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-j7vms" Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.377476 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.377949 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.378106 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-96gbq\"" Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.386975 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbzrl\" (UniqueName: \"kubernetes.io/projected/1e0d47be-2e55-4a4f-8d6e-8b513823b753-kube-api-access-mbzrl\") pod \"auto-csr-approver-29489784-j7vms\" (UID: \"1e0d47be-2e55-4a4f-8d6e-8b513823b753\") " pod="openshift-infra/auto-csr-approver-29489784-j7vms" Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.488385 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mbzrl\" (UniqueName: \"kubernetes.io/projected/1e0d47be-2e55-4a4f-8d6e-8b513823b753-kube-api-access-mbzrl\") pod \"auto-csr-approver-29489784-j7vms\" (UID: \"1e0d47be-2e55-4a4f-8d6e-8b513823b753\") " pod="openshift-infra/auto-csr-approver-29489784-j7vms" Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.700458 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbzrl\" (UniqueName: \"kubernetes.io/projected/1e0d47be-2e55-4a4f-8d6e-8b513823b753-kube-api-access-mbzrl\") pod \"auto-csr-approver-29489784-j7vms\" (UID: \"1e0d47be-2e55-4a4f-8d6e-8b513823b753\") " pod="openshift-infra/auto-csr-approver-29489784-j7vms" Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.721127 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-j7vms" Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.728196 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:24:00 crc kubenswrapper[5107]: I0126 00:24:00.728785 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:24:01 crc kubenswrapper[5107]: I0126 00:24:01.047272 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsmh8" event={"ID":"e1eae064-42f2-490d-903a-4684b0e5cf58","Type":"ContainerStarted","Data":"41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa"} Jan 26 00:24:01 crc kubenswrapper[5107]: I0126 00:24:01.053659 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" event={"ID":"9956eca6-8cc8-40ac-9b69-9500db778f1a","Type":"ContainerStarted","Data":"f92ec9eaddb2792e44fb6660dd34e6cc48ab12a2fc6366a4621f6ec3f1ff8c16"} Jan 26 00:24:01 crc kubenswrapper[5107]: I0126 00:24:01.224851 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hsmh8" podStartSLOduration=11.295328547 podStartE2EDuration="12.224820495s" podCreationTimestamp="2026-01-26 00:23:49 +0000 UTC" firstStartedPulling="2026-01-26 00:23:53.857160695 +0000 UTC m=+878.774755041" lastFinishedPulling="2026-01-26 00:23:54.786652643 +0000 UTC m=+879.704246989" observedRunningTime="2026-01-26 00:24:01.224478035 +0000 UTC m=+886.142072381" watchObservedRunningTime="2026-01-26 00:24:01.224820495 +0000 UTC m=+886.142414841" Jan 26 00:24:01 crc kubenswrapper[5107]: I0126 00:24:01.595467 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" podStartSLOduration=10.980457608 podStartE2EDuration="13.595446928s" podCreationTimestamp="2026-01-26 00:23:48 +0000 UTC" firstStartedPulling="2026-01-26 00:23:53.862105378 +0000 UTC m=+878.779699724" lastFinishedPulling="2026-01-26 00:23:56.477094688 +0000 UTC m=+881.394689044" observedRunningTime="2026-01-26 00:24:01.591291708 +0000 UTC m=+886.508886054" watchObservedRunningTime="2026-01-26 00:24:01.595446928 +0000 UTC m=+886.513041274" Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.163767 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.231609 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.302280 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.323416 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-bundle\") pod \"2f4334a5-7577-470f-b5f7-32206240626a\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.323677 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-util\") pod \"2f4334a5-7577-470f-b5f7-32206240626a\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.323718 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rddf\" (UniqueName: \"kubernetes.io/projected/2f4334a5-7577-470f-b5f7-32206240626a-kube-api-access-9rddf\") pod \"2f4334a5-7577-470f-b5f7-32206240626a\" (UID: \"2f4334a5-7577-470f-b5f7-32206240626a\") " Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.325032 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-bundle" (OuterVolumeSpecName: "bundle") pod "2f4334a5-7577-470f-b5f7-32206240626a" (UID: "2f4334a5-7577-470f-b5f7-32206240626a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.325307 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.326636 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-j7vms"] Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.345257 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f4334a5-7577-470f-b5f7-32206240626a-kube-api-access-9rddf" (OuterVolumeSpecName: "kube-api-access-9rddf") pod "2f4334a5-7577-470f-b5f7-32206240626a" (UID: "2f4334a5-7577-470f-b5f7-32206240626a"). InnerVolumeSpecName "kube-api-access-9rddf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.347093 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-util" (OuterVolumeSpecName: "util") pod "2f4334a5-7577-470f-b5f7-32206240626a" (UID: "2f4334a5-7577-470f-b5f7-32206240626a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.427433 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f4334a5-7577-470f-b5f7-32206240626a-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:02 crc kubenswrapper[5107]: I0126 00:24:02.427477 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9rddf\" (UniqueName: \"kubernetes.io/projected/2f4334a5-7577-470f-b5f7-32206240626a-kube-api-access-9rddf\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:03 crc kubenswrapper[5107]: I0126 00:24:03.088010 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" event={"ID":"2f4334a5-7577-470f-b5f7-32206240626a","Type":"ContainerDied","Data":"21a3afe2379fe75475706243bf28acf898a088516e0ea01aaf093bb02f6b0366"} Jan 26 00:24:03 crc kubenswrapper[5107]: I0126 00:24:03.088505 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21a3afe2379fe75475706243bf28acf898a088516e0ea01aaf093bb02f6b0366" Jan 26 00:24:03 crc kubenswrapper[5107]: I0126 00:24:03.088669 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx" Jan 26 00:24:03 crc kubenswrapper[5107]: I0126 00:24:03.090515 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-j7vms" event={"ID":"1e0d47be-2e55-4a4f-8d6e-8b513823b753","Type":"ContainerStarted","Data":"9792475ac539b7ff8f75e0dfc3c2c8d5b72b01acb6e7bb57db1869300aece14d"} Jan 26 00:24:03 crc kubenswrapper[5107]: I0126 00:24:03.092562 5107 generic.go:358] "Generic (PLEG): container finished" podID="9956eca6-8cc8-40ac-9b69-9500db778f1a" containerID="f92ec9eaddb2792e44fb6660dd34e6cc48ab12a2fc6366a4621f6ec3f1ff8c16" exitCode=0 Jan 26 00:24:03 crc kubenswrapper[5107]: I0126 00:24:03.092963 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" event={"ID":"9956eca6-8cc8-40ac-9b69-9500db778f1a","Type":"ContainerDied","Data":"f92ec9eaddb2792e44fb6660dd34e6cc48ab12a2fc6366a4621f6ec3f1ff8c16"} Jan 26 00:24:03 crc kubenswrapper[5107]: I0126 00:24:03.980810 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjccf"] Jan 26 00:24:04 crc kubenswrapper[5107]: I0126 00:24:04.100006 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qjccf" podUID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerName="registry-server" containerID="cri-o://e9e05a4947636dc98be1457cdd5551bca4e46558a2c8ad27d1b2ea6a8dfc1d2d" gracePeriod=2 Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.111202 5107 generic.go:358] "Generic (PLEG): container finished" podID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerID="e9e05a4947636dc98be1457cdd5551bca4e46558a2c8ad27d1b2ea6a8dfc1d2d" exitCode=0 Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.111274 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjccf" event={"ID":"b88ae022-232b-4b97-87b8-ab58d0d53b45","Type":"ContainerDied","Data":"e9e05a4947636dc98be1457cdd5551bca4e46558a2c8ad27d1b2ea6a8dfc1d2d"} Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.512993 5107 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-xpxw5"] Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.514273 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2f4334a5-7577-470f-b5f7-32206240626a" containerName="util" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.515002 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4334a5-7577-470f-b5f7-32206240626a" containerName="util" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.515110 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2f4334a5-7577-470f-b5f7-32206240626a" containerName="extract" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.515172 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4334a5-7577-470f-b5f7-32206240626a" containerName="extract" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.515261 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2f4334a5-7577-470f-b5f7-32206240626a" containerName="pull" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.515327 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4334a5-7577-470f-b5f7-32206240626a" containerName="pull" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.515549 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="2f4334a5-7577-470f-b5f7-32206240626a" containerName="extract" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.530864 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-xpxw5" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.534989 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.535337 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.535680 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-qsbsh\"" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.542728 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-xpxw5"] Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.633972 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fxq\" (UniqueName: \"kubernetes.io/projected/123740f4-e15a-41f1-a226-52d4c99d5b2c-kube-api-access-q4fxq\") pod \"obo-prometheus-operator-9bc85b4bf-xpxw5\" (UID: \"123740f4-e15a-41f1-a226-52d4c99d5b2c\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-xpxw5" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.688522 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm"] Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.693632 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.697285 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-9cfcd\"" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.697562 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.714675 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z"] Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.730191 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm"] Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.730477 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.732443 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z"] Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.739817 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10bc00e1-36a9-4698-a7d5-8d1378427b9e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z\" (UID: \"10bc00e1-36a9-4698-a7d5-8d1378427b9e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.739919 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c77a1404-d97a-4b52-9272-21ff7b6fe4f7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm\" (UID: \"c77a1404-d97a-4b52-9272-21ff7b6fe4f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.739974 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c77a1404-d97a-4b52-9272-21ff7b6fe4f7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm\" (UID: \"c77a1404-d97a-4b52-9272-21ff7b6fe4f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.740018 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q4fxq\" (UniqueName: \"kubernetes.io/projected/123740f4-e15a-41f1-a226-52d4c99d5b2c-kube-api-access-q4fxq\") pod \"obo-prometheus-operator-9bc85b4bf-xpxw5\" (UID: \"123740f4-e15a-41f1-a226-52d4c99d5b2c\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-xpxw5" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.740063 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10bc00e1-36a9-4698-a7d5-8d1378427b9e-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z\" (UID: \"10bc00e1-36a9-4698-a7d5-8d1378427b9e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.785641 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4fxq\" (UniqueName: \"kubernetes.io/projected/123740f4-e15a-41f1-a226-52d4c99d5b2c-kube-api-access-q4fxq\") pod \"obo-prometheus-operator-9bc85b4bf-xpxw5\" (UID: \"123740f4-e15a-41f1-a226-52d4c99d5b2c\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-xpxw5" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.845099 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10bc00e1-36a9-4698-a7d5-8d1378427b9e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z\" (UID: \"10bc00e1-36a9-4698-a7d5-8d1378427b9e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.846185 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c77a1404-d97a-4b52-9272-21ff7b6fe4f7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm\" (UID: \"c77a1404-d97a-4b52-9272-21ff7b6fe4f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.846252 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c77a1404-d97a-4b52-9272-21ff7b6fe4f7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm\" (UID: \"c77a1404-d97a-4b52-9272-21ff7b6fe4f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.848604 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10bc00e1-36a9-4698-a7d5-8d1378427b9e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z\" (UID: \"10bc00e1-36a9-4698-a7d5-8d1378427b9e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.848862 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-xpxw5" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.855539 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10bc00e1-36a9-4698-a7d5-8d1378427b9e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z\" (UID: \"10bc00e1-36a9-4698-a7d5-8d1378427b9e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.866366 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10bc00e1-36a9-4698-a7d5-8d1378427b9e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z\" (UID: \"10bc00e1-36a9-4698-a7d5-8d1378427b9e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.870529 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c77a1404-d97a-4b52-9272-21ff7b6fe4f7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm\" (UID: \"c77a1404-d97a-4b52-9272-21ff7b6fe4f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.870659 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c77a1404-d97a-4b52-9272-21ff7b6fe4f7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm\" (UID: \"c77a1404-d97a-4b52-9272-21ff7b6fe4f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm" Jan 26 00:24:05 crc kubenswrapper[5107]: I0126 00:24:05.878185 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-7ml9v"] Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.020949 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.188377 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.191728 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-7ml9v"] Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.191963 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-dgrdx"] Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.198572 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-7ml9v" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.206907 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-6px6f\"" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.207632 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.225644 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.235317 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-jxj4q\"" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.269076 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-dgrdx"] Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.291850 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/483aa877-5602-47cb-ba02-45775e6d5cd7-openshift-service-ca\") pod \"perses-operator-669c9f96b5-dgrdx\" (UID: \"483aa877-5602-47cb-ba02-45775e6d5cd7\") " pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.291932 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wcgn\" (UniqueName: \"kubernetes.io/projected/d664f8f1-6e8c-4763-b2e5-3ce3cda11786-kube-api-access-4wcgn\") pod \"observability-operator-85c68dddb-7ml9v\" (UID: \"d664f8f1-6e8c-4763-b2e5-3ce3cda11786\") " pod="openshift-operators/observability-operator-85c68dddb-7ml9v" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.291974 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d664f8f1-6e8c-4763-b2e5-3ce3cda11786-observability-operator-tls\") pod \"observability-operator-85c68dddb-7ml9v\" (UID: \"d664f8f1-6e8c-4763-b2e5-3ce3cda11786\") " pod="openshift-operators/observability-operator-85c68dddb-7ml9v" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.291999 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn5x9\" (UniqueName: \"kubernetes.io/projected/483aa877-5602-47cb-ba02-45775e6d5cd7-kube-api-access-nn5x9\") pod \"perses-operator-669c9f96b5-dgrdx\" (UID: \"483aa877-5602-47cb-ba02-45775e6d5cd7\") " pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.393237 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/483aa877-5602-47cb-ba02-45775e6d5cd7-openshift-service-ca\") pod \"perses-operator-669c9f96b5-dgrdx\" (UID: \"483aa877-5602-47cb-ba02-45775e6d5cd7\") " pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.393317 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4wcgn\" (UniqueName: 
\"kubernetes.io/projected/d664f8f1-6e8c-4763-b2e5-3ce3cda11786-kube-api-access-4wcgn\") pod \"observability-operator-85c68dddb-7ml9v\" (UID: \"d664f8f1-6e8c-4763-b2e5-3ce3cda11786\") " pod="openshift-operators/observability-operator-85c68dddb-7ml9v" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.393396 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d664f8f1-6e8c-4763-b2e5-3ce3cda11786-observability-operator-tls\") pod \"observability-operator-85c68dddb-7ml9v\" (UID: \"d664f8f1-6e8c-4763-b2e5-3ce3cda11786\") " pod="openshift-operators/observability-operator-85c68dddb-7ml9v" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.393423 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nn5x9\" (UniqueName: \"kubernetes.io/projected/483aa877-5602-47cb-ba02-45775e6d5cd7-kube-api-access-nn5x9\") pod \"perses-operator-669c9f96b5-dgrdx\" (UID: \"483aa877-5602-47cb-ba02-45775e6d5cd7\") " pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.394359 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/483aa877-5602-47cb-ba02-45775e6d5cd7-openshift-service-ca\") pod \"perses-operator-669c9f96b5-dgrdx\" (UID: \"483aa877-5602-47cb-ba02-45775e6d5cd7\") " pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.401536 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d664f8f1-6e8c-4763-b2e5-3ce3cda11786-observability-operator-tls\") pod \"observability-operator-85c68dddb-7ml9v\" (UID: \"d664f8f1-6e8c-4763-b2e5-3ce3cda11786\") " pod="openshift-operators/observability-operator-85c68dddb-7ml9v" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.422257 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wcgn\" (UniqueName: \"kubernetes.io/projected/d664f8f1-6e8c-4763-b2e5-3ce3cda11786-kube-api-access-4wcgn\") pod \"observability-operator-85c68dddb-7ml9v\" (UID: \"d664f8f1-6e8c-4763-b2e5-3ce3cda11786\") " pod="openshift-operators/observability-operator-85c68dddb-7ml9v" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.424501 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn5x9\" (UniqueName: \"kubernetes.io/projected/483aa877-5602-47cb-ba02-45775e6d5cd7-kube-api-access-nn5x9\") pod \"perses-operator-669c9f96b5-dgrdx\" (UID: \"483aa877-5602-47cb-ba02-45775e6d5cd7\") " pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.584469 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-7ml9v" Jan 26 00:24:06 crc kubenswrapper[5107]: I0126 00:24:06.610164 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" Jan 26 00:24:09 crc kubenswrapper[5107]: I0126 00:24:09.569388 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:24:09 crc kubenswrapper[5107]: I0126 00:24:09.569487 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:24:09 crc kubenswrapper[5107]: I0126 00:24:09.658701 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:24:10 crc kubenswrapper[5107]: I0126 00:24:10.352418 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.083555 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.238453 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5dk9\" (UniqueName: \"kubernetes.io/projected/9956eca6-8cc8-40ac-9b69-9500db778f1a-kube-api-access-r5dk9\") pod \"9956eca6-8cc8-40ac-9b69-9500db778f1a\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.239118 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-util\") pod \"9956eca6-8cc8-40ac-9b69-9500db778f1a\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.239160 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-bundle\") pod \"9956eca6-8cc8-40ac-9b69-9500db778f1a\" (UID: \"9956eca6-8cc8-40ac-9b69-9500db778f1a\") " Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.243687 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-bundle" (OuterVolumeSpecName: "bundle") pod "9956eca6-8cc8-40ac-9b69-9500db778f1a" (UID: "9956eca6-8cc8-40ac-9b69-9500db778f1a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.255161 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-util" (OuterVolumeSpecName: "util") pod "9956eca6-8cc8-40ac-9b69-9500db778f1a" (UID: "9956eca6-8cc8-40ac-9b69-9500db778f1a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.261416 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9956eca6-8cc8-40ac-9b69-9500db778f1a-kube-api-access-r5dk9" (OuterVolumeSpecName: "kube-api-access-r5dk9") pod "9956eca6-8cc8-40ac-9b69-9500db778f1a" (UID: "9956eca6-8cc8-40ac-9b69-9500db778f1a"). InnerVolumeSpecName "kube-api-access-r5dk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.314500 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.340087 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq" event={"ID":"9956eca6-8cc8-40ac-9b69-9500db778f1a","Type":"ContainerDied","Data":"eeb2d403c2575ad7fc23c5f68d3636ddb927f466353b54415d08eee79548a09c"} Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.340178 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeb2d403c2575ad7fc23c5f68d3636ddb927f466353b54415d08eee79548a09c" Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.344833 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.344868 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9956eca6-8cc8-40ac-9b69-9500db778f1a-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:11 crc kubenswrapper[5107]: I0126 00:24:11.344880 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r5dk9\" (UniqueName: \"kubernetes.io/projected/9956eca6-8cc8-40ac-9b69-9500db778f1a-kube-api-access-r5dk9\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.005200 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.090384 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvkf6\" (UniqueName: \"kubernetes.io/projected/b88ae022-232b-4b97-87b8-ab58d0d53b45-kube-api-access-lvkf6\") pod \"b88ae022-232b-4b97-87b8-ab58d0d53b45\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.090462 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-utilities\") pod \"b88ae022-232b-4b97-87b8-ab58d0d53b45\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.090524 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-catalog-content\") pod \"b88ae022-232b-4b97-87b8-ab58d0d53b45\" (UID: \"b88ae022-232b-4b97-87b8-ab58d0d53b45\") " Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.102080 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-utilities" (OuterVolumeSpecName: "utilities") pod "b88ae022-232b-4b97-87b8-ab58d0d53b45" (UID: "b88ae022-232b-4b97-87b8-ab58d0d53b45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.111166 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88ae022-232b-4b97-87b8-ab58d0d53b45-kube-api-access-lvkf6" (OuterVolumeSpecName: "kube-api-access-lvkf6") pod "b88ae022-232b-4b97-87b8-ab58d0d53b45" (UID: "b88ae022-232b-4b97-87b8-ab58d0d53b45"). 
InnerVolumeSpecName "kube-api-access-lvkf6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.192582 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lvkf6\" (UniqueName: \"kubernetes.io/projected/b88ae022-232b-4b97-87b8-ab58d0d53b45-kube-api-access-lvkf6\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.194093 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.231611 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-dgrdx"] Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.252084 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm"] Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.256126 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-xpxw5"] Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.326535 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b88ae022-232b-4b97-87b8-ab58d0d53b45" (UID: "b88ae022-232b-4b97-87b8-ab58d0d53b45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.332029 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-xpxw5" event={"ID":"123740f4-e15a-41f1-a226-52d4c99d5b2c","Type":"ContainerStarted","Data":"e0f1d2d761214208a3c4d600518383b80052b15af410655ff244a17e0e594697"} Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.340307 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" event={"ID":"483aa877-5602-47cb-ba02-45775e6d5cd7","Type":"ContainerStarted","Data":"6ff568febea7218058459e9a325bcbd6fd41360c449564400f4df7285bee8992"} Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.343283 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm" event={"ID":"c77a1404-d97a-4b52-9272-21ff7b6fe4f7","Type":"ContainerStarted","Data":"366b074bf11a020da9de046eb6394e590892fd1576f7c0570f2ff8fbb1b024b6"} Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.348147 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjccf" event={"ID":"b88ae022-232b-4b97-87b8-ab58d0d53b45","Type":"ContainerDied","Data":"d6987f438d55e2f6bcc395d26f0b48177834fc3d17cc5bc1c719c47103b348c2"} Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.348222 5107 scope.go:117] "RemoveContainer" containerID="e9e05a4947636dc98be1457cdd5551bca4e46558a2c8ad27d1b2ea6a8dfc1d2d" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.348421 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qjccf" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.359651 5107 generic.go:358] "Generic (PLEG): container finished" podID="29213ff4-4c9b-4e6d-90be-74a8ef3334c0" containerID="8c62beed35d33b95eee3fa55740ffc90930ccf9514c527cb6972e5bc159eeefd" exitCode=0 Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.359760 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" event={"ID":"29213ff4-4c9b-4e6d-90be-74a8ef3334c0","Type":"ContainerDied","Data":"8c62beed35d33b95eee3fa55740ffc90930ccf9514c527cb6972e5bc159eeefd"} Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.367683 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-j7vms" event={"ID":"1e0d47be-2e55-4a4f-8d6e-8b513823b753","Type":"ContainerStarted","Data":"ca2b3b8c02bb8fbf9781de2148957e38923de7866b7568ad785b010fe74a0187"} Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.380253 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hsmh8"] Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.388212 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hsmh8" podUID="e1eae064-42f2-490d-903a-4684b0e5cf58" containerName="registry-server" containerID="cri-o://41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa" gracePeriod=2 Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.400296 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88ae022-232b-4b97-87b8-ab58d0d53b45-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.401232 5107 scope.go:117] "RemoveContainer" containerID="00f05fd189ad51d19a6bed281eb774d88f823d3121b80c4c46e8e11c2ad55122" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.429547 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjccf"] Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.431670 5107 scope.go:117] "RemoveContainer" containerID="9e55c5b2e8d01a853c549ec61b38ce1364398229328ff30a811faadba0cf6acc" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.455074 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qjccf"] Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.465792 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z"] Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.474660 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489784-j7vms" podStartSLOduration=3.741780967 podStartE2EDuration="12.474627992s" podCreationTimestamp="2026-01-26 00:24:00 +0000 UTC" firstStartedPulling="2026-01-26 00:24:02.345819969 +0000 UTC m=+887.263414315" lastFinishedPulling="2026-01-26 00:24:11.078666994 +0000 UTC m=+895.996261340" observedRunningTime="2026-01-26 00:24:12.447774054 +0000 UTC m=+897.365368410" watchObservedRunningTime="2026-01-26 00:24:12.474627992 +0000 UTC m=+897.392222338" Jan 26 00:24:12 crc kubenswrapper[5107]: I0126 00:24:12.492817 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-7ml9v"] Jan 
26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.077266 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.211992 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-catalog-content\") pod \"e1eae064-42f2-490d-903a-4684b0e5cf58\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.212352 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgjg4\" (UniqueName: \"kubernetes.io/projected/e1eae064-42f2-490d-903a-4684b0e5cf58-kube-api-access-xgjg4\") pod \"e1eae064-42f2-490d-903a-4684b0e5cf58\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.213623 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-utilities\") pod \"e1eae064-42f2-490d-903a-4684b0e5cf58\" (UID: \"e1eae064-42f2-490d-903a-4684b0e5cf58\") " Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.215829 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-utilities" (OuterVolumeSpecName: "utilities") pod "e1eae064-42f2-490d-903a-4684b0e5cf58" (UID: "e1eae064-42f2-490d-903a-4684b0e5cf58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.245230 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1eae064-42f2-490d-903a-4684b0e5cf58-kube-api-access-xgjg4" (OuterVolumeSpecName: "kube-api-access-xgjg4") pod "e1eae064-42f2-490d-903a-4684b0e5cf58" (UID: "e1eae064-42f2-490d-903a-4684b0e5cf58"). InnerVolumeSpecName "kube-api-access-xgjg4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.288900 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1eae064-42f2-490d-903a-4684b0e5cf58" (UID: "e1eae064-42f2-490d-903a-4684b0e5cf58"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.315695 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xgjg4\" (UniqueName: \"kubernetes.io/projected/e1eae064-42f2-490d-903a-4684b0e5cf58-kube-api-access-xgjg4\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.316153 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.316334 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1eae064-42f2-490d-903a-4684b0e5cf58-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.396623 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z" event={"ID":"10bc00e1-36a9-4698-a7d5-8d1378427b9e","Type":"ContainerStarted","Data":"95930f6fc372e76851bc43f1b764b5051fef0a1e16bc9004facd84f90263ddec"} Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.447304 5107 generic.go:358] "Generic (PLEG): container finished" podID="29213ff4-4c9b-4e6d-90be-74a8ef3334c0" containerID="874614289fd22c520716611d04ae5ceebe8590cde80f8b17afa5b4d0260ba1d6" exitCode=0 Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.447409 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" event={"ID":"29213ff4-4c9b-4e6d-90be-74a8ef3334c0","Type":"ContainerDied","Data":"874614289fd22c520716611d04ae5ceebe8590cde80f8b17afa5b4d0260ba1d6"} Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.452657 5107 generic.go:358] "Generic (PLEG): container finished" podID="1e0d47be-2e55-4a4f-8d6e-8b513823b753" containerID="ca2b3b8c02bb8fbf9781de2148957e38923de7866b7568ad785b010fe74a0187" exitCode=0 Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.452858 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-j7vms" event={"ID":"1e0d47be-2e55-4a4f-8d6e-8b513823b753","Type":"ContainerDied","Data":"ca2b3b8c02bb8fbf9781de2148957e38923de7866b7568ad785b010fe74a0187"} Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.466266 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-7ml9v" event={"ID":"d664f8f1-6e8c-4763-b2e5-3ce3cda11786","Type":"ContainerStarted","Data":"9ad0f59936e597c168e6a99801c5e1d9919654699142a538006dd7206d12953b"} Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.474079 5107 generic.go:358] "Generic (PLEG): container finished" podID="e1eae064-42f2-490d-903a-4684b0e5cf58" containerID="41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa" exitCode=0 Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.474213 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsmh8" event={"ID":"e1eae064-42f2-490d-903a-4684b0e5cf58","Type":"ContainerDied","Data":"41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa"} Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.474936 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsmh8" 
event={"ID":"e1eae064-42f2-490d-903a-4684b0e5cf58","Type":"ContainerDied","Data":"18a49595d4acd12ad98d787c2fb77002aa24aa51f73f3bc01bb88ecc27006f53"} Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.475069 5107 scope.go:117] "RemoveContainer" containerID="41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.474473 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hsmh8" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.619470 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hsmh8"] Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.625979 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hsmh8"] Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.659073 5107 scope.go:117] "RemoveContainer" containerID="a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.837801 5107 scope.go:117] "RemoveContainer" containerID="db5047f6a4a60494e068cf55d4cc49eebce3ee0971db1814d5220dfe51ccee70" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.886338 5107 scope.go:117] "RemoveContainer" containerID="41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa" Jan 26 00:24:13 crc kubenswrapper[5107]: E0126 00:24:13.893039 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa\": container with ID starting with 41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa not found: ID does not exist" containerID="41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.893108 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa"} err="failed to get container status \"41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa\": rpc error: code = NotFound desc = could not find container \"41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa\": container with ID starting with 41fb7afcb4558e406a95ffefb30d2616beb8942fece977a2c695c9d388c743fa not found: ID does not exist" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.893142 5107 scope.go:117] "RemoveContainer" containerID="a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474" Jan 26 00:24:13 crc kubenswrapper[5107]: E0126 00:24:13.893844 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474\": container with ID starting with a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474 not found: ID does not exist" containerID="a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.893876 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474"} err="failed to get container status \"a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474\": rpc error: code = NotFound desc = could not find container 
\"a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474\": container with ID starting with a91bf7c05ad36f9602828cc435c34f9c47b736a8ed8dac43d62e13a58c155474 not found: ID does not exist" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.893909 5107 scope.go:117] "RemoveContainer" containerID="db5047f6a4a60494e068cf55d4cc49eebce3ee0971db1814d5220dfe51ccee70" Jan 26 00:24:13 crc kubenswrapper[5107]: E0126 00:24:13.894624 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5047f6a4a60494e068cf55d4cc49eebce3ee0971db1814d5220dfe51ccee70\": container with ID starting with db5047f6a4a60494e068cf55d4cc49eebce3ee0971db1814d5220dfe51ccee70 not found: ID does not exist" containerID="db5047f6a4a60494e068cf55d4cc49eebce3ee0971db1814d5220dfe51ccee70" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.894645 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5047f6a4a60494e068cf55d4cc49eebce3ee0971db1814d5220dfe51ccee70"} err="failed to get container status \"db5047f6a4a60494e068cf55d4cc49eebce3ee0971db1814d5220dfe51ccee70\": rpc error: code = NotFound desc = could not find container \"db5047f6a4a60494e068cf55d4cc49eebce3ee0971db1814d5220dfe51ccee70\": container with ID starting with db5047f6a4a60494e068cf55d4cc49eebce3ee0971db1814d5220dfe51ccee70 not found: ID does not exist" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.960545 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-g5htt"] Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.962340 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerName="registry-server" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.962757 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerName="registry-server" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.962918 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1eae064-42f2-490d-903a-4684b0e5cf58" containerName="extract-utilities" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963001 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1eae064-42f2-490d-903a-4684b0e5cf58" containerName="extract-utilities" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963087 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9956eca6-8cc8-40ac-9b69-9500db778f1a" containerName="util" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963179 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9956eca6-8cc8-40ac-9b69-9500db778f1a" containerName="util" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963246 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1eae064-42f2-490d-903a-4684b0e5cf58" containerName="extract-content" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963320 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1eae064-42f2-490d-903a-4684b0e5cf58" containerName="extract-content" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963403 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9956eca6-8cc8-40ac-9b69-9500db778f1a" containerName="extract" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963477 5107 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9956eca6-8cc8-40ac-9b69-9500db778f1a" containerName="extract" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963550 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerName="extract-content" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963622 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerName="extract-content" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963697 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerName="extract-utilities" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963755 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerName="extract-utilities" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963817 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1eae064-42f2-490d-903a-4684b0e5cf58" containerName="registry-server" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.963868 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1eae064-42f2-490d-903a-4684b0e5cf58" containerName="registry-server" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.965943 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9956eca6-8cc8-40ac-9b69-9500db778f1a" containerName="pull" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.966022 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9956eca6-8cc8-40ac-9b69-9500db778f1a" containerName="pull" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.966339 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="e1eae064-42f2-490d-903a-4684b0e5cf58" containerName="registry-server" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.966404 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b88ae022-232b-4b97-87b8-ab58d0d53b45" containerName="registry-server" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.966452 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="9956eca6-8cc8-40ac-9b69-9500db778f1a" containerName="extract" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.973632 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-g5htt"] Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.974021 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-g5htt" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.981698 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.982223 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 26 00:24:13 crc kubenswrapper[5107]: I0126 00:24:13.985832 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-wttpd\"" Jan 26 00:24:14 crc kubenswrapper[5107]: I0126 00:24:14.275358 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trm97\" (UniqueName: \"kubernetes.io/projected/fa388ea8-bf8f-4440-8bc3-d619d62dd368-kube-api-access-trm97\") pod \"interconnect-operator-78b9bd8798-g5htt\" (UID: \"fa388ea8-bf8f-4440-8bc3-d619d62dd368\") " pod="service-telemetry/interconnect-operator-78b9bd8798-g5htt" Jan 26 00:24:14 crc kubenswrapper[5107]: I0126 00:24:14.369646 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88ae022-232b-4b97-87b8-ab58d0d53b45" path="/var/lib/kubelet/pods/b88ae022-232b-4b97-87b8-ab58d0d53b45/volumes" Jan 26 00:24:14 crc kubenswrapper[5107]: I0126 00:24:14.371715 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1eae064-42f2-490d-903a-4684b0e5cf58" path="/var/lib/kubelet/pods/e1eae064-42f2-490d-903a-4684b0e5cf58/volumes" Jan 26 00:24:14 crc kubenswrapper[5107]: I0126 00:24:14.387184 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trm97\" (UniqueName: \"kubernetes.io/projected/fa388ea8-bf8f-4440-8bc3-d619d62dd368-kube-api-access-trm97\") pod \"interconnect-operator-78b9bd8798-g5htt\" (UID: \"fa388ea8-bf8f-4440-8bc3-d619d62dd368\") " pod="service-telemetry/interconnect-operator-78b9bd8798-g5htt" Jan 26 00:24:14 crc kubenswrapper[5107]: I0126 00:24:14.428819 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trm97\" (UniqueName: \"kubernetes.io/projected/fa388ea8-bf8f-4440-8bc3-d619d62dd368-kube-api-access-trm97\") pod \"interconnect-operator-78b9bd8798-g5htt\" (UID: \"fa388ea8-bf8f-4440-8bc3-d619d62dd368\") " pod="service-telemetry/interconnect-operator-78b9bd8798-g5htt" Jan 26 00:24:14 crc kubenswrapper[5107]: I0126 00:24:14.615155 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-g5htt" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.274681 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-g5htt"] Jan 26 00:24:15 crc kubenswrapper[5107]: W0126 00:24:15.306691 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa388ea8_bf8f_4440_8bc3_d619d62dd368.slice/crio-c55afd11cbbd681bfdbe087470369936cd2384f8bbcbdbde4567f2da35d3140d WatchSource:0}: Error finding container c55afd11cbbd681bfdbe087470369936cd2384f8bbcbdbde4567f2da35d3140d: Status 404 returned error can't find the container with id c55afd11cbbd681bfdbe087470369936cd2384f8bbcbdbde4567f2da35d3140d Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.395734 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.404067 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-j7vms" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.526816 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-bundle\") pod \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.527065 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4xz7\" (UniqueName: \"kubernetes.io/projected/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-kube-api-access-n4xz7\") pod \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.527119 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-util\") pod \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\" (UID: \"29213ff4-4c9b-4e6d-90be-74a8ef3334c0\") " Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.527146 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbzrl\" (UniqueName: \"kubernetes.io/projected/1e0d47be-2e55-4a4f-8d6e-8b513823b753-kube-api-access-mbzrl\") pod \"1e0d47be-2e55-4a4f-8d6e-8b513823b753\" (UID: \"1e0d47be-2e55-4a4f-8d6e-8b513823b753\") " Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.531974 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-bundle" (OuterVolumeSpecName: "bundle") pod "29213ff4-4c9b-4e6d-90be-74a8ef3334c0" (UID: "29213ff4-4c9b-4e6d-90be-74a8ef3334c0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.545213 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e0d47be-2e55-4a4f-8d6e-8b513823b753-kube-api-access-mbzrl" (OuterVolumeSpecName: "kube-api-access-mbzrl") pod "1e0d47be-2e55-4a4f-8d6e-8b513823b753" (UID: "1e0d47be-2e55-4a4f-8d6e-8b513823b753"). InnerVolumeSpecName "kube-api-access-mbzrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.561065 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-util" (OuterVolumeSpecName: "util") pod "29213ff4-4c9b-4e6d-90be-74a8ef3334c0" (UID: "29213ff4-4c9b-4e6d-90be-74a8ef3334c0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.561861 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-kube-api-access-n4xz7" (OuterVolumeSpecName: "kube-api-access-n4xz7") pod "29213ff4-4c9b-4e6d-90be-74a8ef3334c0" (UID: "29213ff4-4c9b-4e6d-90be-74a8ef3334c0"). InnerVolumeSpecName "kube-api-access-n4xz7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.575964 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-pbg8c"] Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.583743 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-g5htt" event={"ID":"fa388ea8-bf8f-4440-8bc3-d619d62dd368","Type":"ContainerStarted","Data":"c55afd11cbbd681bfdbe087470369936cd2384f8bbcbdbde4567f2da35d3140d"} Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.585952 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-pbg8c"] Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.592582 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" event={"ID":"29213ff4-4c9b-4e6d-90be-74a8ef3334c0","Type":"ContainerDied","Data":"d5c47c58bf6033275adb04c1fb76559135d8d9f1495853b484ac3bc42eadd244"} Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.592663 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c47c58bf6033275adb04c1fb76559135d8d9f1495853b484ac3bc42eadd244" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.592611 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.619603 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-j7vms" event={"ID":"1e0d47be-2e55-4a4f-8d6e-8b513823b753","Type":"ContainerDied","Data":"9792475ac539b7ff8f75e0dfc3c2c8d5b72b01acb6e7bb57db1869300aece14d"} Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.619659 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9792475ac539b7ff8f75e0dfc3c2c8d5b72b01acb6e7bb57db1869300aece14d" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.619748 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-j7vms" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.629148 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mbzrl\" (UniqueName: \"kubernetes.io/projected/1e0d47be-2e55-4a4f-8d6e-8b513823b753-kube-api-access-mbzrl\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.629196 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.629209 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n4xz7\" (UniqueName: \"kubernetes.io/projected/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-kube-api-access-n4xz7\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:15 crc kubenswrapper[5107]: I0126 00:24:15.629221 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29213ff4-4c9b-4e6d-90be-74a8ef3334c0-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.132362 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2bb130b-2e77-4120-b7f1-9a67acbbbb4c" path="/var/lib/kubelet/pods/e2bb130b-2e77-4120-b7f1-9a67acbbbb4c/volumes" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.623021 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-544d96db9c-fxx6v"] Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.623815 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29213ff4-4c9b-4e6d-90be-74a8ef3334c0" containerName="extract" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.623833 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="29213ff4-4c9b-4e6d-90be-74a8ef3334c0" containerName="extract" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.623845 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1e0d47be-2e55-4a4f-8d6e-8b513823b753" containerName="oc" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.623851 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0d47be-2e55-4a4f-8d6e-8b513823b753" containerName="oc" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.623876 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29213ff4-4c9b-4e6d-90be-74a8ef3334c0" containerName="pull" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.623882 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="29213ff4-4c9b-4e6d-90be-74a8ef3334c0" containerName="pull" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.624055 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29213ff4-4c9b-4e6d-90be-74a8ef3334c0" containerName="util" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.624061 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="29213ff4-4c9b-4e6d-90be-74a8ef3334c0" containerName="util" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.624266 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="1e0d47be-2e55-4a4f-8d6e-8b513823b753" containerName="oc" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.624285 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="29213ff4-4c9b-4e6d-90be-74a8ef3334c0" containerName="extract" Jan 26 00:24:16 crc kubenswrapper[5107]: 
I0126 00:24:16.702529 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-544d96db9c-fxx6v"] Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.702775 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.705583 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-lqs9j\"" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.707802 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.768872 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9643b13-54a6-4da3-8a06-de88fb261565-webhook-cert\") pod \"elastic-operator-544d96db9c-fxx6v\" (UID: \"a9643b13-54a6-4da3-8a06-de88fb261565\") " pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.768986 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a9643b13-54a6-4da3-8a06-de88fb261565-apiservice-cert\") pod \"elastic-operator-544d96db9c-fxx6v\" (UID: \"a9643b13-54a6-4da3-8a06-de88fb261565\") " pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.769030 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp4cc\" (UniqueName: \"kubernetes.io/projected/a9643b13-54a6-4da3-8a06-de88fb261565-kube-api-access-jp4cc\") pod \"elastic-operator-544d96db9c-fxx6v\" (UID: \"a9643b13-54a6-4da3-8a06-de88fb261565\") " pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.878920 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9643b13-54a6-4da3-8a06-de88fb261565-webhook-cert\") pod \"elastic-operator-544d96db9c-fxx6v\" (UID: \"a9643b13-54a6-4da3-8a06-de88fb261565\") " pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.879029 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a9643b13-54a6-4da3-8a06-de88fb261565-apiservice-cert\") pod \"elastic-operator-544d96db9c-fxx6v\" (UID: \"a9643b13-54a6-4da3-8a06-de88fb261565\") " pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.879087 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jp4cc\" (UniqueName: \"kubernetes.io/projected/a9643b13-54a6-4da3-8a06-de88fb261565-kube-api-access-jp4cc\") pod \"elastic-operator-544d96db9c-fxx6v\" (UID: \"a9643b13-54a6-4da3-8a06-de88fb261565\") " pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.901733 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a9643b13-54a6-4da3-8a06-de88fb261565-apiservice-cert\") pod \"elastic-operator-544d96db9c-fxx6v\" (UID: 
\"a9643b13-54a6-4da3-8a06-de88fb261565\") " pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.901836 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9643b13-54a6-4da3-8a06-de88fb261565-webhook-cert\") pod \"elastic-operator-544d96db9c-fxx6v\" (UID: \"a9643b13-54a6-4da3-8a06-de88fb261565\") " pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" Jan 26 00:24:16 crc kubenswrapper[5107]: I0126 00:24:16.919161 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp4cc\" (UniqueName: \"kubernetes.io/projected/a9643b13-54a6-4da3-8a06-de88fb261565-kube-api-access-jp4cc\") pod \"elastic-operator-544d96db9c-fxx6v\" (UID: \"a9643b13-54a6-4da3-8a06-de88fb261565\") " pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" Jan 26 00:24:17 crc kubenswrapper[5107]: I0126 00:24:17.041629 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f2mpq_2e5342d5-2d0c-458d-94b7-25c802ce298a/kube-multus/0.log" Jan 26 00:24:17 crc kubenswrapper[5107]: I0126 00:24:17.041897 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" Jan 26 00:24:17 crc kubenswrapper[5107]: I0126 00:24:17.043155 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f2mpq_2e5342d5-2d0c-458d-94b7-25c802ce298a/kube-multus/0.log" Jan 26 00:24:17 crc kubenswrapper[5107]: I0126 00:24:17.065208 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:24:17 crc kubenswrapper[5107]: I0126 00:24:17.079334 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:24:17 crc kubenswrapper[5107]: I0126 00:24:17.089058 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:24:17 crc kubenswrapper[5107]: I0126 00:24:17.099002 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:24:17 crc kubenswrapper[5107]: I0126 00:24:17.418422 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-544d96db9c-fxx6v"] Jan 26 00:24:17 crc kubenswrapper[5107]: W0126 00:24:17.441612 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9643b13_54a6_4da3_8a06_de88fb261565.slice/crio-7f4e5357385243d919217a2caffc65fb9b530a62e8e1cf9169d790f9a1e87077 WatchSource:0}: Error finding container 7f4e5357385243d919217a2caffc65fb9b530a62e8e1cf9169d790f9a1e87077: Status 404 returned error can't find the container with id 7f4e5357385243d919217a2caffc65fb9b530a62e8e1cf9169d790f9a1e87077 Jan 26 00:24:17 crc kubenswrapper[5107]: I0126 00:24:17.659476 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" 
event={"ID":"a9643b13-54a6-4da3-8a06-de88fb261565","Type":"ContainerStarted","Data":"7f4e5357385243d919217a2caffc65fb9b530a62e8e1cf9169d790f9a1e87077"} Jan 26 00:24:27 crc kubenswrapper[5107]: I0126 00:24:27.818522 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vqqrf"] Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.444670 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.445024 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-catalog-content\") pod \"community-operators-vqqrf\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.445953 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt74l\" (UniqueName: \"kubernetes.io/projected/b52c3dde-b654-47d6-98ab-a794750ba7ea-kube-api-access-gt74l\") pod \"community-operators-vqqrf\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.448647 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-utilities\") pod \"community-operators-vqqrf\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.589647 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-utilities\") pod \"community-operators-vqqrf\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.589732 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-catalog-content\") pod \"community-operators-vqqrf\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.589754 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gt74l\" (UniqueName: \"kubernetes.io/projected/b52c3dde-b654-47d6-98ab-a794750ba7ea-kube-api-access-gt74l\") pod \"community-operators-vqqrf\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.590639 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-catalog-content\") pod \"community-operators-vqqrf\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.590768 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-utilities\") pod \"community-operators-vqqrf\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.602587 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vqqrf"] Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.678418 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt74l\" (UniqueName: \"kubernetes.io/projected/b52c3dde-b654-47d6-98ab-a794750ba7ea-kube-api-access-gt74l\") pod \"community-operators-vqqrf\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:24:28 crc kubenswrapper[5107]: I0126 00:24:28.938391 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:24:30 crc kubenswrapper[5107]: I0126 00:24:30.787196 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:24:30 crc kubenswrapper[5107]: I0126 00:24:30.787765 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:24:30 crc kubenswrapper[5107]: I0126 00:24:30.787817 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:24:30 crc kubenswrapper[5107]: I0126 00:24:30.788674 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed7fa55042f2cc4045dc49359ff131078dd30efec1ec5c7e0bdd12d2f213019e"} pod="openshift-machine-config-operator/machine-config-daemon-94c4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:24:30 crc kubenswrapper[5107]: I0126 00:24:30.788785 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" containerID="cri-o://ed7fa55042f2cc4045dc49359ff131078dd30efec1ec5c7e0bdd12d2f213019e" gracePeriod=600 Jan 26 00:24:32 crc kubenswrapper[5107]: I0126 00:24:32.058398 5107 generic.go:358] "Generic (PLEG): container finished" podID="7d907601-1852-43f9-8a70-ef4e71351e81" containerID="ed7fa55042f2cc4045dc49359ff131078dd30efec1ec5c7e0bdd12d2f213019e" exitCode=0 Jan 26 00:24:32 crc kubenswrapper[5107]: I0126 00:24:32.058508 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerDied","Data":"ed7fa55042f2cc4045dc49359ff131078dd30efec1ec5c7e0bdd12d2f213019e"} Jan 26 00:24:32 crc kubenswrapper[5107]: I0126 00:24:32.058607 5107 scope.go:117] "RemoveContainer" containerID="fe8dfc3c3a0dc6bbcbdfa5c6d9274312703627715669e0a705943cc27e300da3" Jan 26 00:24:33 crc 
kubenswrapper[5107]: I0126 00:24:33.350114 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n"] Jan 26 00:24:37 crc kubenswrapper[5107]: I0126 00:24:37.029379 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n"] Jan 26 00:24:37 crc kubenswrapper[5107]: I0126 00:24:37.029749 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n" Jan 26 00:24:37 crc kubenswrapper[5107]: I0126 00:24:37.032331 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:24:37 crc kubenswrapper[5107]: I0126 00:24:37.032981 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:24:37 crc kubenswrapper[5107]: I0126 00:24:37.033007 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-h6nnk\"" Jan 26 00:24:37 crc kubenswrapper[5107]: I0126 00:24:37.083220 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc4b6\" (UniqueName: \"kubernetes.io/projected/78721aca-96f9-4226-8912-52d79b0b6261-kube-api-access-qc4b6\") pod \"cert-manager-operator-controller-manager-64c74584c4-tds9n\" (UID: \"78721aca-96f9-4226-8912-52d79b0b6261\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n" Jan 26 00:24:37 crc kubenswrapper[5107]: I0126 00:24:37.083605 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78721aca-96f9-4226-8912-52d79b0b6261-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-tds9n\" (UID: \"78721aca-96f9-4226-8912-52d79b0b6261\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n" Jan 26 00:24:37 crc kubenswrapper[5107]: I0126 00:24:37.185536 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qc4b6\" (UniqueName: \"kubernetes.io/projected/78721aca-96f9-4226-8912-52d79b0b6261-kube-api-access-qc4b6\") pod \"cert-manager-operator-controller-manager-64c74584c4-tds9n\" (UID: \"78721aca-96f9-4226-8912-52d79b0b6261\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n" Jan 26 00:24:37 crc kubenswrapper[5107]: I0126 00:24:37.185600 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78721aca-96f9-4226-8912-52d79b0b6261-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-tds9n\" (UID: \"78721aca-96f9-4226-8912-52d79b0b6261\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n" Jan 26 00:24:37 crc kubenswrapper[5107]: I0126 00:24:37.186651 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78721aca-96f9-4226-8912-52d79b0b6261-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-tds9n\" (UID: \"78721aca-96f9-4226-8912-52d79b0b6261\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n" Jan 26 00:24:37 crc 
kubenswrapper[5107]: I0126 00:24:37.211920 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc4b6\" (UniqueName: \"kubernetes.io/projected/78721aca-96f9-4226-8912-52d79b0b6261-kube-api-access-qc4b6\") pod \"cert-manager-operator-controller-manager-64c74584c4-tds9n\" (UID: \"78721aca-96f9-4226-8912-52d79b0b6261\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n" Jan 26 00:24:37 crc kubenswrapper[5107]: I0126 00:24:37.355699 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n" Jan 26 00:24:52 crc kubenswrapper[5107]: I0126 00:24:52.030618 5107 scope.go:117] "RemoveContainer" containerID="f7b2a467179547ad601467006c2bdf83998fecf2a87fe3837025efd6f8bef2f5" Jan 26 00:25:05 crc kubenswrapper[5107]: I0126 00:25:05.628335 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vqqrf"] Jan 26 00:25:05 crc kubenswrapper[5107]: I0126 00:25:05.679842 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:25:05 crc kubenswrapper[5107]: I0126 00:25:05.701447 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n"] Jan 26 00:25:06 crc kubenswrapper[5107]: W0126 00:25:06.014152 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78721aca_96f9_4226_8912_52d79b0b6261.slice/crio-4f929961b5a2137ecdda7b4f20a7a325cfabf11f35e9fe4862b22133e83f953b WatchSource:0}: Error finding container 4f929961b5a2137ecdda7b4f20a7a325cfabf11f35e9fe4862b22133e83f953b: Status 404 returned error can't find the container with id 4f929961b5a2137ecdda7b4f20a7a325cfabf11f35e9fe4862b22133e83f953b Jan 26 00:25:06 crc kubenswrapper[5107]: I0126 00:25:06.412868 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerStarted","Data":"e28c2aa1f735d66e536651d9e0f8d196d2dccaf318caefe5b09e5743bda32586"} Jan 26 00:25:06 crc kubenswrapper[5107]: I0126 00:25:06.416206 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqqrf" event={"ID":"b52c3dde-b654-47d6-98ab-a794750ba7ea","Type":"ContainerStarted","Data":"6b7a54bdaf8c5359e82820304b70b72d9f88aeb4dc622cb0088b45f3d394daae"} Jan 26 00:25:06 crc kubenswrapper[5107]: I0126 00:25:06.418487 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n" event={"ID":"78721aca-96f9-4226-8912-52d79b0b6261","Type":"ContainerStarted","Data":"4f929961b5a2137ecdda7b4f20a7a325cfabf11f35e9fe4862b22133e83f953b"} Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.428439 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" event={"ID":"483aa877-5602-47cb-ba02-45775e6d5cd7","Type":"ContainerStarted","Data":"bfb3c07d6f096d8e6ad292cbb20e90f79ab105943d9841ace57114a06df3a08b"} Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.429534 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.430989 5107 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z" event={"ID":"10bc00e1-36a9-4698-a7d5-8d1378427b9e","Type":"ContainerStarted","Data":"67df8a5648af882f4091d6ddf515aa97efb6dd7c24680cf02fb85599dfa67845"} Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.441540 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm" event={"ID":"c77a1404-d97a-4b52-9272-21ff7b6fe4f7","Type":"ContainerStarted","Data":"b5df11c595d924d9ca64a91019872bfd136443294e2c451d85d737c5ac4037ca"} Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.445586 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" event={"ID":"a9643b13-54a6-4da3-8a06-de88fb261565","Type":"ContainerStarted","Data":"018c99b73261e0fc2d459c9070278a049ad623006ee2321ea7c88ade9d9c779a"} Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.447540 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-g5htt" event={"ID":"fa388ea8-bf8f-4440-8bc3-d619d62dd368","Type":"ContainerStarted","Data":"29ee5eaea564a865e59be38ebc8af9d6bc1b50cd9ced13ca5214163cbb814c6e"} Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.449920 5107 generic.go:358] "Generic (PLEG): container finished" podID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerID="52f3e0616225b2b76f89d7dd7c0090b20514c883fec9e5a8bd74f44e8bed2dcb" exitCode=0 Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.449999 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqqrf" event={"ID":"b52c3dde-b654-47d6-98ab-a794750ba7ea","Type":"ContainerDied","Data":"52f3e0616225b2b76f89d7dd7c0090b20514c883fec9e5a8bd74f44e8bed2dcb"} Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.459403 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-xpxw5" event={"ID":"123740f4-e15a-41f1-a226-52d4c99d5b2c","Type":"ContainerStarted","Data":"2da5589c878f2a8fc8f5daf95135f90a2aae27a8855f2547d7623b7bf1331319"} Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.468467 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" podStartSLOduration=9.434626804 podStartE2EDuration="1m2.468438486s" podCreationTimestamp="2026-01-26 00:24:05 +0000 UTC" firstStartedPulling="2026-01-26 00:24:12.276193795 +0000 UTC m=+897.193788141" lastFinishedPulling="2026-01-26 00:25:05.310005477 +0000 UTC m=+950.227599823" observedRunningTime="2026-01-26 00:25:07.459762465 +0000 UTC m=+952.377356811" watchObservedRunningTime="2026-01-26 00:25:07.468438486 +0000 UTC m=+952.386032832" Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.469739 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-7ml9v" event={"ID":"d664f8f1-6e8c-4763-b2e5-3ce3cda11786","Type":"ContainerStarted","Data":"478e5509cd4cebec76777703f6fd13f6667be237eece2c0f5f7d2a956e47fc1a"} Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.470246 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-7ml9v" Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.472322 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-85c68dddb-7ml9v" Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.486753 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-xpxw5" podStartSLOduration=9.42378059 podStartE2EDuration="1m2.486722886s" podCreationTimestamp="2026-01-26 00:24:05 +0000 UTC" firstStartedPulling="2026-01-26 00:24:12.29328863 +0000 UTC m=+897.210882976" lastFinishedPulling="2026-01-26 00:25:05.356230926 +0000 UTC m=+950.273825272" observedRunningTime="2026-01-26 00:25:07.478744095 +0000 UTC m=+952.396338441" watchObservedRunningTime="2026-01-26 00:25:07.486722886 +0000 UTC m=+952.404317232" Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.526294 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm" podStartSLOduration=9.426487538 podStartE2EDuration="1m2.52621755s" podCreationTimestamp="2026-01-26 00:24:05 +0000 UTC" firstStartedPulling="2026-01-26 00:24:12.294360531 +0000 UTC m=+897.211954877" lastFinishedPulling="2026-01-26 00:25:05.394090543 +0000 UTC m=+950.311684889" observedRunningTime="2026-01-26 00:25:07.499071193 +0000 UTC m=+952.416665539" watchObservedRunningTime="2026-01-26 00:25:07.52621755 +0000 UTC m=+952.443811906" Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.572200 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-g5htt" podStartSLOduration=4.558940333 podStartE2EDuration="54.572169879s" podCreationTimestamp="2026-01-26 00:24:13 +0000 UTC" firstStartedPulling="2026-01-26 00:24:15.310694815 +0000 UTC m=+900.228289161" lastFinishedPulling="2026-01-26 00:25:05.32392436 +0000 UTC m=+950.241518707" observedRunningTime="2026-01-26 00:25:07.562205871 +0000 UTC m=+952.479800217" watchObservedRunningTime="2026-01-26 00:25:07.572169879 +0000 UTC m=+952.489764225" Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.595220 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z" podStartSLOduration=9.842643449 podStartE2EDuration="1m2.595191546s" podCreationTimestamp="2026-01-26 00:24:05 +0000 UTC" firstStartedPulling="2026-01-26 00:24:12.557549043 +0000 UTC m=+897.475143389" lastFinishedPulling="2026-01-26 00:25:05.31009714 +0000 UTC m=+950.227691486" observedRunningTime="2026-01-26 00:25:07.588312777 +0000 UTC m=+952.505907123" watchObservedRunningTime="2026-01-26 00:25:07.595191546 +0000 UTC m=+952.512785892" Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.627856 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-544d96db9c-fxx6v" podStartSLOduration=3.458588868 podStartE2EDuration="51.627833021s" podCreationTimestamp="2026-01-26 00:24:16 +0000 UTC" firstStartedPulling="2026-01-26 00:24:17.457395434 +0000 UTC m=+902.374989780" lastFinishedPulling="2026-01-26 00:25:05.626639587 +0000 UTC m=+950.544233933" observedRunningTime="2026-01-26 00:25:07.6274293 +0000 UTC m=+952.545023666" watchObservedRunningTime="2026-01-26 00:25:07.627833021 +0000 UTC m=+952.545427367" Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.663225 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-7ml9v" podStartSLOduration=9.870474766 
podStartE2EDuration="1m2.663199326s" podCreationTimestamp="2026-01-26 00:24:05 +0000 UTC" firstStartedPulling="2026-01-26 00:24:12.532868769 +0000 UTC m=+897.450463115" lastFinishedPulling="2026-01-26 00:25:05.325593329 +0000 UTC m=+950.243187675" observedRunningTime="2026-01-26 00:25:07.660793266 +0000 UTC m=+952.578387612" watchObservedRunningTime="2026-01-26 00:25:07.663199326 +0000 UTC m=+952.580793672" Jan 26 00:25:07 crc kubenswrapper[5107]: I0126 00:25:07.778308 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.062951 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.063438 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.070525 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.070671 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.070694 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.070800 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.070849 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.070921 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.070972 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-66mkd\"" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.071083 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.071959 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.171754 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.171822 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-bin-local\") pod 
\"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.171852 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.171873 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.171913 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.171942 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/a8f17221-ddb9-4086-9482-deaa9be2efa0-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.171979 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.172028 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.172046 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.172073 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: 
\"kubernetes.io/configmap/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.172089 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.172108 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.172122 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.172145 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.172164 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274065 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274124 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274146 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: 
\"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274170 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274189 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274208 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274236 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274255 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274283 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274300 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274321 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 
00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274342 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/a8f17221-ddb9-4086-9482-deaa9be2efa0-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274378 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274428 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.274447 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.283783 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.284164 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.285291 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.286448 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.286793 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: 
\"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.290201 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.292433 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/a8f17221-ddb9-4086-9482-deaa9be2efa0-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.294624 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.294741 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.295524 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.296157 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.298746 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.299082 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/a8f17221-ddb9-4086-9482-deaa9be2efa0-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: 
I0126 00:25:08.299159 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.309031 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/a8f17221-ddb9-4086-9482-deaa9be2efa0-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"a8f17221-ddb9-4086-9482-deaa9be2efa0\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:08 crc kubenswrapper[5107]: I0126 00:25:08.388637 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:09 crc kubenswrapper[5107]: I0126 00:25:09.023935 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:25:09 crc kubenswrapper[5107]: I0126 00:25:09.492718 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqqrf" event={"ID":"b52c3dde-b654-47d6-98ab-a794750ba7ea","Type":"ContainerStarted","Data":"1af0f9c194cfa8a4124036b39d062ae3f775803759e4d48ae052319d82d55d18"} Jan 26 00:25:09 crc kubenswrapper[5107]: I0126 00:25:09.494966 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a8f17221-ddb9-4086-9482-deaa9be2efa0","Type":"ContainerStarted","Data":"7dee0eadb8ac8d0c0e5fe7c13d5daee2f04039529680cc6b2b8c0db49364628f"} Jan 26 00:25:10 crc kubenswrapper[5107]: I0126 00:25:10.514612 5107 generic.go:358] "Generic (PLEG): container finished" podID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerID="1af0f9c194cfa8a4124036b39d062ae3f775803759e4d48ae052319d82d55d18" exitCode=0 Jan 26 00:25:10 crc kubenswrapper[5107]: I0126 00:25:10.514784 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqqrf" event={"ID":"b52c3dde-b654-47d6-98ab-a794750ba7ea","Type":"ContainerDied","Data":"1af0f9c194cfa8a4124036b39d062ae3f775803759e4d48ae052319d82d55d18"} Jan 26 00:25:11 crc kubenswrapper[5107]: I0126 00:25:11.554829 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqqrf" event={"ID":"b52c3dde-b654-47d6-98ab-a794750ba7ea","Type":"ContainerStarted","Data":"f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999"} Jan 26 00:25:11 crc kubenswrapper[5107]: I0126 00:25:11.582025 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vqqrf" podStartSLOduration=43.086477485 podStartE2EDuration="44.582003745s" podCreationTimestamp="2026-01-26 00:24:27 +0000 UTC" firstStartedPulling="2026-01-26 00:25:07.450578619 +0000 UTC m=+952.368172965" lastFinishedPulling="2026-01-26 00:25:08.946104879 +0000 UTC m=+953.863699225" observedRunningTime="2026-01-26 00:25:11.577992849 +0000 UTC m=+956.495587215" watchObservedRunningTime="2026-01-26 00:25:11.582003745 +0000 UTC m=+956.499598091" Jan 26 00:25:18 crc kubenswrapper[5107]: I0126 00:25:18.487619 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/perses-operator-669c9f96b5-dgrdx" Jan 26 00:25:18 crc kubenswrapper[5107]: I0126 00:25:18.939127 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:25:18 crc kubenswrapper[5107]: I0126 00:25:18.939223 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:25:18 crc kubenswrapper[5107]: I0126 00:25:18.990123 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:25:19 crc kubenswrapper[5107]: I0126 00:25:19.588429 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:25:19 crc kubenswrapper[5107]: I0126 00:25:19.642820 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vqqrf"] Jan 26 00:25:21 crc kubenswrapper[5107]: I0126 00:25:21.549186 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vqqrf" podUID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerName="registry-server" containerID="cri-o://f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999" gracePeriod=2 Jan 26 00:25:24 crc kubenswrapper[5107]: I0126 00:25:24.611509 5107 generic.go:358] "Generic (PLEG): container finished" podID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerID="f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999" exitCode=0 Jan 26 00:25:24 crc kubenswrapper[5107]: I0126 00:25:24.612793 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqqrf" event={"ID":"b52c3dde-b654-47d6-98ab-a794750ba7ea","Type":"ContainerDied","Data":"f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999"} Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.531184 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.664379 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.664669 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.678713 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.678713 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.678851 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-x24l2\"" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.679149 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.733448 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.733592 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxznx\" (UniqueName: \"kubernetes.io/projected/7f312ff2-e1e9-468a-8fa2-baf40b465121-kube-api-access-nxznx\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.733750 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.733829 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.733907 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.734066 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 
00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.734102 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.734172 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.734399 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.734455 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.734526 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.734639 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.928388 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.928523 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nxznx\" (UniqueName: \"kubernetes.io/projected/7f312ff2-e1e9-468a-8fa2-baf40b465121-kube-api-access-nxznx\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.928684 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.928773 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.928850 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.929066 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.934050 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.934545 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.938810 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.939832 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.940185 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: 
\"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.940448 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.940513 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.934764 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.941149 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.941274 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.941402 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.941935 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.946540 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.948824 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.950091 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.951188 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.963586 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.984027 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxznx\" (UniqueName: \"kubernetes.io/projected/7f312ff2-e1e9-468a-8fa2-baf40b465121-kube-api-access-nxznx\") pod \"service-telemetry-operator-1-build\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:27 crc kubenswrapper[5107]: I0126 00:25:27.999367 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:29 crc kubenswrapper[5107]: E0126 00:25:29.523538 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999 is running failed: container process not found" containerID="f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:25:29 crc kubenswrapper[5107]: E0126 00:25:29.524362 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999 is running failed: container process not found" containerID="f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:25:29 crc kubenswrapper[5107]: E0126 00:25:29.526117 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999 is running failed: container process not found" containerID="f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:25:29 crc kubenswrapper[5107]: E0126 00:25:29.526173 5107 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-vqqrf" podUID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerName="registry-server" probeResult="unknown" Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.390333 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.501526 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-catalog-content\") pod \"b52c3dde-b654-47d6-98ab-a794750ba7ea\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.502393 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt74l\" (UniqueName: \"kubernetes.io/projected/b52c3dde-b654-47d6-98ab-a794750ba7ea-kube-api-access-gt74l\") pod \"b52c3dde-b654-47d6-98ab-a794750ba7ea\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.502622 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-utilities\") pod \"b52c3dde-b654-47d6-98ab-a794750ba7ea\" (UID: \"b52c3dde-b654-47d6-98ab-a794750ba7ea\") " Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.504203 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-utilities" (OuterVolumeSpecName: "utilities") pod "b52c3dde-b654-47d6-98ab-a794750ba7ea" (UID: "b52c3dde-b654-47d6-98ab-a794750ba7ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.524715 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52c3dde-b654-47d6-98ab-a794750ba7ea-kube-api-access-gt74l" (OuterVolumeSpecName: "kube-api-access-gt74l") pod "b52c3dde-b654-47d6-98ab-a794750ba7ea" (UID: "b52c3dde-b654-47d6-98ab-a794750ba7ea"). InnerVolumeSpecName "kube-api-access-gt74l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.574226 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b52c3dde-b654-47d6-98ab-a794750ba7ea" (UID: "b52c3dde-b654-47d6-98ab-a794750ba7ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.604524 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gt74l\" (UniqueName: \"kubernetes.io/projected/b52c3dde-b654-47d6-98ab-a794750ba7ea-kube-api-access-gt74l\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.604578 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.604595 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b52c3dde-b654-47d6-98ab-a794750ba7ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.690879 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqqrf" event={"ID":"b52c3dde-b654-47d6-98ab-a794750ba7ea","Type":"ContainerDied","Data":"6b7a54bdaf8c5359e82820304b70b72d9f88aeb4dc622cb0088b45f3d394daae"} Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.690973 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vqqrf" Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.691020 5107 scope.go:117] "RemoveContainer" containerID="f3a4f7f10373ce5c09668f5b733da9fbbc51ce6b82cad05c39886997ff12f999" Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.778731 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vqqrf"] Jan 26 00:25:31 crc kubenswrapper[5107]: I0126 00:25:31.786772 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vqqrf"] Jan 26 00:25:32 crc kubenswrapper[5107]: I0126 00:25:32.122147 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52c3dde-b654-47d6-98ab-a794750ba7ea" path="/var/lib/kubelet/pods/b52c3dde-b654-47d6-98ab-a794750ba7ea/volumes" Jan 26 00:25:34 crc kubenswrapper[5107]: I0126 00:25:34.171345 5107 scope.go:117] "RemoveContainer" containerID="1af0f9c194cfa8a4124036b39d062ae3f775803759e4d48ae052319d82d55d18" Jan 26 00:25:37 crc kubenswrapper[5107]: I0126 00:25:37.904758 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:25:40 crc kubenswrapper[5107]: I0126 00:25:40.074261 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:25:40 crc kubenswrapper[5107]: I0126 00:25:40.075415 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerName="extract-utilities" Jan 26 00:25:40 crc kubenswrapper[5107]: I0126 00:25:40.075433 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerName="extract-utilities" Jan 26 00:25:40 crc kubenswrapper[5107]: I0126 00:25:40.075450 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerName="registry-server" Jan 26 00:25:40 crc kubenswrapper[5107]: I0126 00:25:40.075456 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerName="registry-server" Jan 26 00:25:40 crc kubenswrapper[5107]: I0126 00:25:40.075478 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerName="extract-content" Jan 26 00:25:40 crc kubenswrapper[5107]: I0126 00:25:40.075484 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerName="extract-content" Jan 26 00:25:40 crc kubenswrapper[5107]: I0126 00:25:40.075671 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b52c3dde-b654-47d6-98ab-a794750ba7ea" containerName="registry-server" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.568767 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.574054 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.574751 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.577581 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.578365 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.715742 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.715817 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.715918 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8527d\" (UniqueName: \"kubernetes.io/projected/e49e6c4b-b61e-40bf-8b52-2abf782b22df-kube-api-access-8527d\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.715970 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.716006 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.716084 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 
00:25:44.716228 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.716262 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.716288 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.716327 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.716370 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.716428 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.817718 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.818085 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.818254 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.818271 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.818490 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.818606 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.818836 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819021 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819151 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819268 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819396 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: 
\"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819203 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819274 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819153 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819536 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819177 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819575 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819798 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8527d\" (UniqueName: \"kubernetes.io/projected/e49e6c4b-b61e-40bf-8b52-2abf782b22df-kube-api-access-8527d\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.819955 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.820092 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.820773 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.827096 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.827172 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.848808 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8527d\" (UniqueName: \"kubernetes.io/projected/e49e6c4b-b61e-40bf-8b52-2abf782b22df-kube-api-access-8527d\") pod \"service-telemetry-operator-2-build\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:44 crc kubenswrapper[5107]: I0126 00:25:44.898078 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:53 crc kubenswrapper[5107]: I0126 00:25:53.838197 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:25:54 crc kubenswrapper[5107]: I0126 00:25:54.197111 5107 scope.go:117] "RemoveContainer" containerID="52f3e0616225b2b76f89d7dd7c0090b20514c883fec9e5a8bd74f44e8bed2dcb" Jan 26 00:25:54 crc kubenswrapper[5107]: I0126 00:25:54.909397 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"7f312ff2-e1e9-468a-8fa2-baf40b465121","Type":"ContainerStarted","Data":"5aeaed1b73683b6165c503c3e6ef7ea199ac74807dbf3acc47d425ab43439c17"} Jan 26 00:25:55 crc kubenswrapper[5107]: I0126 00:25:55.643059 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:25:57 crc kubenswrapper[5107]: I0126 00:25:57.940549 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e49e6c4b-b61e-40bf-8b52-2abf782b22df","Type":"ContainerStarted","Data":"986f7f46c8a5b0aed0d8345aebad3e2e3c4d2ccae284dd952d2fbe551bb1ad73"} Jan 26 00:25:58 crc kubenswrapper[5107]: I0126 00:25:58.957121 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n" event={"ID":"78721aca-96f9-4226-8912-52d79b0b6261","Type":"ContainerStarted","Data":"5b1e5f4c3fbef2b6cf21ae639918994d156d96f3de8c12a63143e869323b3846"} Jan 26 00:25:58 crc kubenswrapper[5107]: I0126 00:25:58.960572 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a8f17221-ddb9-4086-9482-deaa9be2efa0","Type":"ContainerStarted","Data":"1fa7a6bc223bd0b31a8e1c0178b57171f01a70cfecf33e92743b68e19bed06fe"} Jan 26 00:25:58 crc kubenswrapper[5107]: I0126 00:25:58.987713 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-tds9n" podStartSLOduration=37.832178694 podStartE2EDuration="1m25.987689638s" podCreationTimestamp="2026-01-26 00:24:33 +0000 UTC" firstStartedPulling="2026-01-26 00:25:06.017351472 +0000 UTC m=+950.934945818" lastFinishedPulling="2026-01-26 00:25:54.172862416 +0000 UTC m=+999.090456762" observedRunningTime="2026-01-26 00:25:58.980620994 +0000 UTC m=+1003.898215350" watchObservedRunningTime="2026-01-26 00:25:58.987689638 +0000 UTC m=+1003.905283984" Jan 26 00:25:59 crc kubenswrapper[5107]: I0126 00:25:59.150373 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:25:59 crc kubenswrapper[5107]: I0126 00:25:59.305385 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:26:00 crc kubenswrapper[5107]: I0126 00:26:00.183307 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489786-8l48g"] Jan 26 00:26:00 crc kubenswrapper[5107]: I0126 00:26:00.229084 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-8l48g"] Jan 26 00:26:00 crc kubenswrapper[5107]: I0126 00:26:00.229260 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-8l48g" Jan 26 00:26:00 crc kubenswrapper[5107]: I0126 00:26:00.234340 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-96gbq\"" Jan 26 00:26:00 crc kubenswrapper[5107]: I0126 00:26:00.236274 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:26:00 crc kubenswrapper[5107]: I0126 00:26:00.243356 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:26:00 crc kubenswrapper[5107]: I0126 00:26:00.290129 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbb5\" (UniqueName: \"kubernetes.io/projected/9f5d729a-317b-4713-a3c3-c5cc309edc5e-kube-api-access-9vbb5\") pod \"auto-csr-approver-29489786-8l48g\" (UID: \"9f5d729a-317b-4713-a3c3-c5cc309edc5e\") " pod="openshift-infra/auto-csr-approver-29489786-8l48g" Jan 26 00:26:00 crc kubenswrapper[5107]: I0126 00:26:00.391865 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vbb5\" (UniqueName: \"kubernetes.io/projected/9f5d729a-317b-4713-a3c3-c5cc309edc5e-kube-api-access-9vbb5\") pod \"auto-csr-approver-29489786-8l48g\" (UID: \"9f5d729a-317b-4713-a3c3-c5cc309edc5e\") " pod="openshift-infra/auto-csr-approver-29489786-8l48g" Jan 26 00:26:00 crc kubenswrapper[5107]: I0126 00:26:00.416481 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vbb5\" (UniqueName: \"kubernetes.io/projected/9f5d729a-317b-4713-a3c3-c5cc309edc5e-kube-api-access-9vbb5\") pod \"auto-csr-approver-29489786-8l48g\" (UID: \"9f5d729a-317b-4713-a3c3-c5cc309edc5e\") " pod="openshift-infra/auto-csr-approver-29489786-8l48g" Jan 26 00:26:00 crc kubenswrapper[5107]: I0126 00:26:00.558315 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-8l48g" Jan 26 00:26:01 crc kubenswrapper[5107]: I0126 00:26:01.151252 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-8l48g"] Jan 26 00:26:01 crc kubenswrapper[5107]: I0126 00:26:01.989464 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-8l48g" event={"ID":"9f5d729a-317b-4713-a3c3-c5cc309edc5e","Type":"ContainerStarted","Data":"b726c4d8f6de668a67be8d0eac2dfe736c09b8c27d7cf40c40eaa80ccec002be"} Jan 26 00:26:03 crc kubenswrapper[5107]: I0126 00:26:03.005101 5107 generic.go:358] "Generic (PLEG): container finished" podID="a8f17221-ddb9-4086-9482-deaa9be2efa0" containerID="1fa7a6bc223bd0b31a8e1c0178b57171f01a70cfecf33e92743b68e19bed06fe" exitCode=0 Jan 26 00:26:03 crc kubenswrapper[5107]: I0126 00:26:03.005192 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a8f17221-ddb9-4086-9482-deaa9be2efa0","Type":"ContainerDied","Data":"1fa7a6bc223bd0b31a8e1c0178b57171f01a70cfecf33e92743b68e19bed06fe"} Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.291253 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr"] Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.321220 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr"] Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.321398 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.324694 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.325380 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.326619 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-q26fq\"" Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.406661 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6097a808-7fcb-4512-9054-3de1585157e7-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-j6gmr\" (UID: \"6097a808-7fcb-4512-9054-3de1585157e7\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.406751 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcfjl\" (UniqueName: \"kubernetes.io/projected/6097a808-7fcb-4512-9054-3de1585157e7-kube-api-access-bcfjl\") pod \"cert-manager-webhook-7894b5b9b4-j6gmr\" (UID: \"6097a808-7fcb-4512-9054-3de1585157e7\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.507768 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6097a808-7fcb-4512-9054-3de1585157e7-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-j6gmr\" (UID: \"6097a808-7fcb-4512-9054-3de1585157e7\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" 
Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.507843 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bcfjl\" (UniqueName: \"kubernetes.io/projected/6097a808-7fcb-4512-9054-3de1585157e7-kube-api-access-bcfjl\") pod \"cert-manager-webhook-7894b5b9b4-j6gmr\" (UID: \"6097a808-7fcb-4512-9054-3de1585157e7\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.534760 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6097a808-7fcb-4512-9054-3de1585157e7-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-j6gmr\" (UID: \"6097a808-7fcb-4512-9054-3de1585157e7\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.535093 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcfjl\" (UniqueName: \"kubernetes.io/projected/6097a808-7fcb-4512-9054-3de1585157e7-kube-api-access-bcfjl\") pod \"cert-manager-webhook-7894b5b9b4-j6gmr\" (UID: \"6097a808-7fcb-4512-9054-3de1585157e7\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" Jan 26 00:26:04 crc kubenswrapper[5107]: I0126 00:26:04.643438 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" Jan 26 00:26:05 crc kubenswrapper[5107]: I0126 00:26:05.100224 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4"] Jan 26 00:26:05 crc kubenswrapper[5107]: I0126 00:26:05.168790 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4" Jan 26 00:26:05 crc kubenswrapper[5107]: I0126 00:26:05.174096 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-nf2qv\"" Jan 26 00:26:05 crc kubenswrapper[5107]: I0126 00:26:05.174923 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4"] Jan 26 00:26:05 crc kubenswrapper[5107]: I0126 00:26:05.260087 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8fk\" (UniqueName: \"kubernetes.io/projected/10f941c4-c1f6-4bf3-9b29-581e7a206ef8-kube-api-access-fl8fk\") pod \"cert-manager-cainjector-7dbf76d5c8-hd5l4\" (UID: \"10f941c4-c1f6-4bf3-9b29-581e7a206ef8\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4" Jan 26 00:26:05 crc kubenswrapper[5107]: I0126 00:26:05.260312 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10f941c4-c1f6-4bf3-9b29-581e7a206ef8-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-hd5l4\" (UID: \"10f941c4-c1f6-4bf3-9b29-581e7a206ef8\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4" Jan 26 00:26:05 crc kubenswrapper[5107]: I0126 00:26:05.361468 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8fk\" (UniqueName: \"kubernetes.io/projected/10f941c4-c1f6-4bf3-9b29-581e7a206ef8-kube-api-access-fl8fk\") pod \"cert-manager-cainjector-7dbf76d5c8-hd5l4\" (UID: \"10f941c4-c1f6-4bf3-9b29-581e7a206ef8\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4" Jan 26 00:26:05 crc kubenswrapper[5107]: 
I0126 00:26:05.361978 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10f941c4-c1f6-4bf3-9b29-581e7a206ef8-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-hd5l4\" (UID: \"10f941c4-c1f6-4bf3-9b29-581e7a206ef8\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4" Jan 26 00:26:05 crc kubenswrapper[5107]: I0126 00:26:05.414306 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8fk\" (UniqueName: \"kubernetes.io/projected/10f941c4-c1f6-4bf3-9b29-581e7a206ef8-kube-api-access-fl8fk\") pod \"cert-manager-cainjector-7dbf76d5c8-hd5l4\" (UID: \"10f941c4-c1f6-4bf3-9b29-581e7a206ef8\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4" Jan 26 00:26:05 crc kubenswrapper[5107]: I0126 00:26:05.421737 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10f941c4-c1f6-4bf3-9b29-581e7a206ef8-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-hd5l4\" (UID: \"10f941c4-c1f6-4bf3-9b29-581e7a206ef8\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4" Jan 26 00:26:05 crc kubenswrapper[5107]: I0126 00:26:05.486634 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4" Jan 26 00:26:10 crc kubenswrapper[5107]: I0126 00:26:10.537933 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-d2ls2"] Jan 26 00:26:10 crc kubenswrapper[5107]: I0126 00:26:10.717122 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-d2ls2"] Jan 26 00:26:10 crc kubenswrapper[5107]: I0126 00:26:10.717420 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-d2ls2" Jan 26 00:26:10 crc kubenswrapper[5107]: I0126 00:26:10.721757 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-6zkf5\"" Jan 26 00:26:10 crc kubenswrapper[5107]: I0126 00:26:10.748639 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e19fa3e-ab08-4129-9714-1ba2e512aa68-bound-sa-token\") pod \"cert-manager-858d87f86b-d2ls2\" (UID: \"1e19fa3e-ab08-4129-9714-1ba2e512aa68\") " pod="cert-manager/cert-manager-858d87f86b-d2ls2" Jan 26 00:26:10 crc kubenswrapper[5107]: I0126 00:26:10.748731 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgh6r\" (UniqueName: \"kubernetes.io/projected/1e19fa3e-ab08-4129-9714-1ba2e512aa68-kube-api-access-qgh6r\") pod \"cert-manager-858d87f86b-d2ls2\" (UID: \"1e19fa3e-ab08-4129-9714-1ba2e512aa68\") " pod="cert-manager/cert-manager-858d87f86b-d2ls2" Jan 26 00:26:10 crc kubenswrapper[5107]: I0126 00:26:10.850680 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e19fa3e-ab08-4129-9714-1ba2e512aa68-bound-sa-token\") pod \"cert-manager-858d87f86b-d2ls2\" (UID: \"1e19fa3e-ab08-4129-9714-1ba2e512aa68\") " pod="cert-manager/cert-manager-858d87f86b-d2ls2" Jan 26 00:26:10 crc kubenswrapper[5107]: I0126 00:26:10.850790 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qgh6r\" (UniqueName: \"kubernetes.io/projected/1e19fa3e-ab08-4129-9714-1ba2e512aa68-kube-api-access-qgh6r\") pod \"cert-manager-858d87f86b-d2ls2\" (UID: \"1e19fa3e-ab08-4129-9714-1ba2e512aa68\") " pod="cert-manager/cert-manager-858d87f86b-d2ls2" Jan 26 00:26:10 crc kubenswrapper[5107]: I0126 00:26:10.875822 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e19fa3e-ab08-4129-9714-1ba2e512aa68-bound-sa-token\") pod \"cert-manager-858d87f86b-d2ls2\" (UID: \"1e19fa3e-ab08-4129-9714-1ba2e512aa68\") " pod="cert-manager/cert-manager-858d87f86b-d2ls2" Jan 26 00:26:10 crc kubenswrapper[5107]: I0126 00:26:10.875916 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgh6r\" (UniqueName: \"kubernetes.io/projected/1e19fa3e-ab08-4129-9714-1ba2e512aa68-kube-api-access-qgh6r\") pod \"cert-manager-858d87f86b-d2ls2\" (UID: \"1e19fa3e-ab08-4129-9714-1ba2e512aa68\") " pod="cert-manager/cert-manager-858d87f86b-d2ls2" Jan 26 00:26:11 crc kubenswrapper[5107]: I0126 00:26:11.045976 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-d2ls2" Jan 26 00:26:12 crc kubenswrapper[5107]: I0126 00:26:12.007744 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4"] Jan 26 00:26:12 crc kubenswrapper[5107]: I0126 00:26:12.331585 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr"] Jan 26 00:26:15 crc kubenswrapper[5107]: W0126 00:26:15.176281 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10f941c4_c1f6_4bf3_9b29_581e7a206ef8.slice/crio-54eec69057a4bc65f877424ac9ae37eb5c8330bde4b1e7cb73c399084456121d WatchSource:0}: Error finding container 54eec69057a4bc65f877424ac9ae37eb5c8330bde4b1e7cb73c399084456121d: Status 404 returned error can't find the container with id 54eec69057a4bc65f877424ac9ae37eb5c8330bde4b1e7cb73c399084456121d Jan 26 00:26:15 crc kubenswrapper[5107]: W0126 00:26:15.178169 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6097a808_7fcb_4512_9054_3de1585157e7.slice/crio-66e0e6bcd2ce8d1b98744c578295b07dab72f3b12cfc84353d472a757878885b WatchSource:0}: Error finding container 66e0e6bcd2ce8d1b98744c578295b07dab72f3b12cfc84353d472a757878885b: Status 404 returned error can't find the container with id 66e0e6bcd2ce8d1b98744c578295b07dab72f3b12cfc84353d472a757878885b Jan 26 00:26:15 crc kubenswrapper[5107]: I0126 00:26:15.385274 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4" event={"ID":"10f941c4-c1f6-4bf3-9b29-581e7a206ef8","Type":"ContainerStarted","Data":"54eec69057a4bc65f877424ac9ae37eb5c8330bde4b1e7cb73c399084456121d"} Jan 26 00:26:15 crc kubenswrapper[5107]: I0126 00:26:15.386485 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" event={"ID":"6097a808-7fcb-4512-9054-3de1585157e7","Type":"ContainerStarted","Data":"66e0e6bcd2ce8d1b98744c578295b07dab72f3b12cfc84353d472a757878885b"} Jan 26 00:26:16 crc kubenswrapper[5107]: I0126 00:26:16.092002 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-d2ls2"] Jan 26 00:26:16 crc kubenswrapper[5107]: W0126 00:26:16.262817 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e19fa3e_ab08_4129_9714_1ba2e512aa68.slice/crio-6174f5031291c566dbd7a001b9b42becff993c2799902dd023ce8fc4bd220d2a WatchSource:0}: Error finding container 6174f5031291c566dbd7a001b9b42becff993c2799902dd023ce8fc4bd220d2a: Status 404 returned error can't find the container with id 6174f5031291c566dbd7a001b9b42becff993c2799902dd023ce8fc4bd220d2a Jan 26 00:26:16 crc kubenswrapper[5107]: I0126 00:26:16.399968 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-d2ls2" event={"ID":"1e19fa3e-ab08-4129-9714-1ba2e512aa68","Type":"ContainerStarted","Data":"6174f5031291c566dbd7a001b9b42becff993c2799902dd023ce8fc4bd220d2a"} Jan 26 00:26:17 crc kubenswrapper[5107]: I0126 00:26:17.421923 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"7f312ff2-e1e9-468a-8fa2-baf40b465121","Type":"ContainerStarted","Data":"762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33"} Jan 26 00:26:17 crc 
kubenswrapper[5107]: I0126 00:26:17.422062 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="7f312ff2-e1e9-468a-8fa2-baf40b465121" containerName="manage-dockerfile" containerID="cri-o://762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33" gracePeriod=30 Jan 26 00:26:17 crc kubenswrapper[5107]: I0126 00:26:17.427910 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e49e6c4b-b61e-40bf-8b52-2abf782b22df","Type":"ContainerStarted","Data":"af4b921360521519f75991230399c35963e805f1eb1ece6215cbaf67286752e3"} Jan 26 00:26:17 crc kubenswrapper[5107]: I0126 00:26:17.431302 5107 generic.go:358] "Generic (PLEG): container finished" podID="9f5d729a-317b-4713-a3c3-c5cc309edc5e" containerID="a8cf6d7bd24a367b398879ebee3516aaf3b4804a916e585d85d99f70fe28e350" exitCode=0 Jan 26 00:26:17 crc kubenswrapper[5107]: I0126 00:26:17.432448 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-8l48g" event={"ID":"9f5d729a-317b-4713-a3c3-c5cc309edc5e","Type":"ContainerDied","Data":"a8cf6d7bd24a367b398879ebee3516aaf3b4804a916e585d85d99f70fe28e350"} Jan 26 00:26:17 crc kubenswrapper[5107]: I0126 00:26:17.435810 5107 generic.go:358] "Generic (PLEG): container finished" podID="a8f17221-ddb9-4086-9482-deaa9be2efa0" containerID="5006866b4ce5f2941568a096e249c30e2184768227342a64447cf7abad80e831" exitCode=0 Jan 26 00:26:17 crc kubenswrapper[5107]: I0126 00:26:17.435905 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a8f17221-ddb9-4086-9482-deaa9be2efa0","Type":"ContainerDied","Data":"5006866b4ce5f2941568a096e249c30e2184768227342a64447cf7abad80e831"} Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.169648 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_7f312ff2-e1e9-468a-8fa2-baf40b465121/manage-dockerfile/0.log" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.171068 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.186659 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-root\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.186911 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildcachedir\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.187060 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-ca-bundles\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.187122 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-proxy-ca-bundles\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.187156 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-blob-cache\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.187212 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-node-pullsecrets\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.187251 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-system-configs\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.187336 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxznx\" (UniqueName: \"kubernetes.io/projected/7f312ff2-e1e9-468a-8fa2-baf40b465121-kube-api-access-nxznx\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.187389 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-push\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.187421 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: 
\"kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-pull\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.187500 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-run\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.187563 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildworkdir\") pod \"7f312ff2-e1e9-468a-8fa2-baf40b465121\" (UID: \"7f312ff2-e1e9-468a-8fa2-baf40b465121\") " Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.188708 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.188749 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.188895 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.189373 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.189421 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.189458 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.189560 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.189583 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.189848 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.201920 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f312ff2-e1e9-468a-8fa2-baf40b465121-kube-api-access-nxznx" (OuterVolumeSpecName: "kube-api-access-nxznx") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "kube-api-access-nxznx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.210616 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-pull" (OuterVolumeSpecName: "builder-dockercfg-x24l2-pull") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "builder-dockercfg-x24l2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.217972 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-push" (OuterVolumeSpecName: "builder-dockercfg-x24l2-push") pod "7f312ff2-e1e9-468a-8fa2-baf40b465121" (UID: "7f312ff2-e1e9-468a-8fa2-baf40b465121"). InnerVolumeSpecName "builder-dockercfg-x24l2-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.289905 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.289946 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.289965 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.289977 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.289991 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7f312ff2-e1e9-468a-8fa2-baf40b465121-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.290002 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nxznx\" (UniqueName: \"kubernetes.io/projected/7f312ff2-e1e9-468a-8fa2-baf40b465121-kube-api-access-nxznx\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.290016 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.290032 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/7f312ff2-e1e9-468a-8fa2-baf40b465121-builder-dockercfg-x24l2-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.290047 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.290059 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.290069 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7f312ff2-e1e9-468a-8fa2-baf40b465121-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.290081 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7f312ff2-e1e9-468a-8fa2-baf40b465121-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.475156 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_7f312ff2-e1e9-468a-8fa2-baf40b465121/manage-dockerfile/0.log" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.475246 5107 generic.go:358] "Generic (PLEG): container finished" podID="7f312ff2-e1e9-468a-8fa2-baf40b465121" containerID="762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33" exitCode=1 Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.475708 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.475951 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"7f312ff2-e1e9-468a-8fa2-baf40b465121","Type":"ContainerDied","Data":"762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33"} Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.476005 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"7f312ff2-e1e9-468a-8fa2-baf40b465121","Type":"ContainerDied","Data":"5aeaed1b73683b6165c503c3e6ef7ea199ac74807dbf3acc47d425ab43439c17"} Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.476027 5107 scope.go:117] "RemoveContainer" containerID="762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.506452 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"a8f17221-ddb9-4086-9482-deaa9be2efa0","Type":"ContainerStarted","Data":"30389d12807b36c15d38e57a9353e43602056367978bc652f20ee76241641c9d"} Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.525326 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.544741 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.559284 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=22.044602297 podStartE2EDuration="1m11.559243119s" podCreationTimestamp="2026-01-26 00:25:07 +0000 UTC" firstStartedPulling="2026-01-26 00:25:09.059778991 +0000 UTC m=+953.977373347" lastFinishedPulling="2026-01-26 00:25:58.574419823 +0000 UTC m=+1003.492014169" observedRunningTime="2026-01-26 00:26:18.551410262 +0000 UTC m=+1023.469004608" watchObservedRunningTime="2026-01-26 00:26:18.559243119 +0000 UTC m=+1023.476837465" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.578195 5107 scope.go:117] "RemoveContainer" containerID="762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33" Jan 26 00:26:18 crc kubenswrapper[5107]: E0126 00:26:18.580282 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33\": container with ID starting with 762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33 not found: ID does not exist" containerID="762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33" Jan 26 00:26:18 crc kubenswrapper[5107]: I0126 00:26:18.580339 5107 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33"} err="failed to get container status \"762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33\": rpc error: code = NotFound desc = could not find container \"762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33\": container with ID starting with 762870b89ac934ebd44d454a7610d92b866bf0a7d914be3fb6bad4604ac19b33 not found: ID does not exist" Jan 26 00:26:19 crc kubenswrapper[5107]: I0126 00:26:19.000462 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-8l48g" Jan 26 00:26:19 crc kubenswrapper[5107]: I0126 00:26:19.103711 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vbb5\" (UniqueName: \"kubernetes.io/projected/9f5d729a-317b-4713-a3c3-c5cc309edc5e-kube-api-access-9vbb5\") pod \"9f5d729a-317b-4713-a3c3-c5cc309edc5e\" (UID: \"9f5d729a-317b-4713-a3c3-c5cc309edc5e\") " Jan 26 00:26:19 crc kubenswrapper[5107]: I0126 00:26:19.113180 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f5d729a-317b-4713-a3c3-c5cc309edc5e-kube-api-access-9vbb5" (OuterVolumeSpecName: "kube-api-access-9vbb5") pod "9f5d729a-317b-4713-a3c3-c5cc309edc5e" (UID: "9f5d729a-317b-4713-a3c3-c5cc309edc5e"). InnerVolumeSpecName "kube-api-access-9vbb5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:26:19 crc kubenswrapper[5107]: I0126 00:26:19.205237 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vbb5\" (UniqueName: \"kubernetes.io/projected/9f5d729a-317b-4713-a3c3-c5cc309edc5e-kube-api-access-9vbb5\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:19 crc kubenswrapper[5107]: I0126 00:26:19.563652 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-8l48g" Jan 26 00:26:19 crc kubenswrapper[5107]: I0126 00:26:19.563679 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-8l48g" event={"ID":"9f5d729a-317b-4713-a3c3-c5cc309edc5e","Type":"ContainerDied","Data":"b726c4d8f6de668a67be8d0eac2dfe736c09b8c27d7cf40c40eaa80ccec002be"} Jan 26 00:26:19 crc kubenswrapper[5107]: I0126 00:26:19.563747 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b726c4d8f6de668a67be8d0eac2dfe736c09b8c27d7cf40c40eaa80ccec002be" Jan 26 00:26:20 crc kubenswrapper[5107]: I0126 00:26:20.095620 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-qn6md"] Jan 26 00:26:20 crc kubenswrapper[5107]: I0126 00:26:20.104275 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-qn6md"] Jan 26 00:26:20 crc kubenswrapper[5107]: I0126 00:26:20.127574 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b61f87-e656-4450-af3e-26b5c5454e30" path="/var/lib/kubelet/pods/52b61f87-e656-4450-af3e-26b5c5454e30/volumes" Jan 26 00:26:20 crc kubenswrapper[5107]: I0126 00:26:20.128538 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f312ff2-e1e9-468a-8fa2-baf40b465121" path="/var/lib/kubelet/pods/7f312ff2-e1e9-468a-8fa2-baf40b465121/volumes" Jan 26 00:26:23 crc kubenswrapper[5107]: I0126 00:26:23.511056 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:26:26 crc kubenswrapper[5107]: I0126 00:26:26.523224 5107 generic.go:358] "Generic (PLEG): container finished" podID="e49e6c4b-b61e-40bf-8b52-2abf782b22df" containerID="af4b921360521519f75991230399c35963e805f1eb1ece6215cbaf67286752e3" exitCode=0 Jan 26 00:26:26 crc kubenswrapper[5107]: I0126 00:26:26.534224 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e49e6c4b-b61e-40bf-8b52-2abf782b22df","Type":"ContainerDied","Data":"af4b921360521519f75991230399c35963e805f1eb1ece6215cbaf67286752e3"} Jan 26 00:26:29 crc kubenswrapper[5107]: I0126 00:26:29.000827 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="a8f17221-ddb9-4086-9482-deaa9be2efa0" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:26:29 crc kubenswrapper[5107]: {"timestamp": "2026-01-26T00:26:28+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:26:29 crc kubenswrapper[5107]: > Jan 26 00:26:33 crc kubenswrapper[5107]: I0126 00:26:33.620874 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="a8f17221-ddb9-4086-9482-deaa9be2efa0" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:26:33 crc kubenswrapper[5107]: {"timestamp": "2026-01-26T00:26:33+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:26:33 crc kubenswrapper[5107]: > Jan 26 00:26:39 crc kubenswrapper[5107]: I0126 00:26:39.717720 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="a8f17221-ddb9-4086-9482-deaa9be2efa0" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:26:39 crc kubenswrapper[5107]: {"timestamp": "2026-01-26T00:26:39+00:00", "message": "readiness probe 
failed", "curl_rc": "7"} Jan 26 00:26:39 crc kubenswrapper[5107]: > Jan 26 00:26:43 crc kubenswrapper[5107]: I0126 00:26:43.627051 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="a8f17221-ddb9-4086-9482-deaa9be2efa0" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:26:43 crc kubenswrapper[5107]: {"timestamp": "2026-01-26T00:26:43+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:26:43 crc kubenswrapper[5107]: > Jan 26 00:26:47 crc kubenswrapper[5107]: I0126 00:26:47.794075 5107 generic.go:358] "Generic (PLEG): container finished" podID="e49e6c4b-b61e-40bf-8b52-2abf782b22df" containerID="94a1d7f837db12b681156f0fcdb6dca51aac330d92c202acd3b60498c38c949b" exitCode=0 Jan 26 00:26:47 crc kubenswrapper[5107]: I0126 00:26:47.794135 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e49e6c4b-b61e-40bf-8b52-2abf782b22df","Type":"ContainerDied","Data":"94a1d7f837db12b681156f0fcdb6dca51aac330d92c202acd3b60498c38c949b"} Jan 26 00:26:47 crc kubenswrapper[5107]: I0126 00:26:47.839645 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e49e6c4b-b61e-40bf-8b52-2abf782b22df/manage-dockerfile/0.log" Jan 26 00:26:48 crc kubenswrapper[5107]: I0126 00:26:48.806858 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-d2ls2" event={"ID":"1e19fa3e-ab08-4129-9714-1ba2e512aa68","Type":"ContainerStarted","Data":"e1c5c50b0c6a45cf60bb2ef03279196ccd4e3f34adaeb06c40db98cbb5a1531a"} Jan 26 00:26:48 crc kubenswrapper[5107]: I0126 00:26:48.810445 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4" event={"ID":"10f941c4-c1f6-4bf3-9b29-581e7a206ef8","Type":"ContainerStarted","Data":"c6619d8cbb559edf343b90b21d37a4b0d3621923004a2f7ee666c5247172acd9"} Jan 26 00:26:48 crc kubenswrapper[5107]: I0126 00:26:48.813009 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" event={"ID":"6097a808-7fcb-4512-9054-3de1585157e7","Type":"ContainerStarted","Data":"3815c5c45b486645f9844b7f646c14d8ed074e5e525293af6f7e55ae614b81f5"} Jan 26 00:26:48 crc kubenswrapper[5107]: I0126 00:26:48.813424 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" Jan 26 00:26:48 crc kubenswrapper[5107]: I0126 00:26:48.817640 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e49e6c4b-b61e-40bf-8b52-2abf782b22df","Type":"ContainerStarted","Data":"7db11ea3515ec8912923cbef142405d3ba361daf97be5754d64a557d24dac88d"} Jan 26 00:26:48 crc kubenswrapper[5107]: I0126 00:26:48.891477 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-d2ls2" podStartSLOduration=7.177413781 podStartE2EDuration="38.891439309s" podCreationTimestamp="2026-01-26 00:26:10 +0000 UTC" firstStartedPulling="2026-01-26 00:26:16.271571424 +0000 UTC m=+1021.189165780" lastFinishedPulling="2026-01-26 00:26:47.985596962 +0000 UTC m=+1052.903191308" observedRunningTime="2026-01-26 00:26:48.847235199 +0000 UTC m=+1053.764829545" watchObservedRunningTime="2026-01-26 00:26:48.891439309 +0000 UTC m=+1053.809033655" Jan 26 00:26:48 crc kubenswrapper[5107]: I0126 00:26:48.969686 5107 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=51.264057246 podStartE2EDuration="1m9.969656174s" podCreationTimestamp="2026-01-26 00:25:39 +0000 UTC" firstStartedPulling="2026-01-26 00:25:56.930151277 +0000 UTC m=+1001.847745623" lastFinishedPulling="2026-01-26 00:26:15.635750205 +0000 UTC m=+1020.553344551" observedRunningTime="2026-01-26 00:26:48.96125521 +0000 UTC m=+1053.878849556" watchObservedRunningTime="2026-01-26 00:26:48.969656174 +0000 UTC m=+1053.887250520" Jan 26 00:26:48 crc kubenswrapper[5107]: I0126 00:26:48.972488 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" podStartSLOduration=12.587720397 podStartE2EDuration="44.972468575s" podCreationTimestamp="2026-01-26 00:26:04 +0000 UTC" firstStartedPulling="2026-01-26 00:26:15.564643866 +0000 UTC m=+1020.482238212" lastFinishedPulling="2026-01-26 00:26:47.949392044 +0000 UTC m=+1052.866986390" observedRunningTime="2026-01-26 00:26:48.889355619 +0000 UTC m=+1053.806949965" watchObservedRunningTime="2026-01-26 00:26:48.972468575 +0000 UTC m=+1053.890062921" Jan 26 00:26:49 crc kubenswrapper[5107]: I0126 00:26:49.002758 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-hd5l4" podStartSLOduration=11.684959753 podStartE2EDuration="44.002717841s" podCreationTimestamp="2026-01-26 00:26:05 +0000 UTC" firstStartedPulling="2026-01-26 00:26:15.632271774 +0000 UTC m=+1020.549866120" lastFinishedPulling="2026-01-26 00:26:47.950029862 +0000 UTC m=+1052.867624208" observedRunningTime="2026-01-26 00:26:48.99300271 +0000 UTC m=+1053.910597046" watchObservedRunningTime="2026-01-26 00:26:49.002717841 +0000 UTC m=+1053.920312187" Jan 26 00:26:49 crc kubenswrapper[5107]: I0126 00:26:49.298085 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:26:54 crc kubenswrapper[5107]: I0126 00:26:54.831802 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6gmr" Jan 26 00:27:11 crc kubenswrapper[5107]: I0126 00:27:11.362591 5107 scope.go:117] "RemoveContainer" containerID="465f540c310b393f6c1e985b7598732c9afc00a55e34d937da3aec18535e59db" Jan 26 00:27:30 crc kubenswrapper[5107]: I0126 00:27:30.724001 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:27:30 crc kubenswrapper[5107]: I0126 00:27:30.724773 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.143621 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489788-w92s7"] Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.145228 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f5d729a-317b-4713-a3c3-c5cc309edc5e" containerName="oc" Jan 26 
00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.145248 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f5d729a-317b-4713-a3c3-c5cc309edc5e" containerName="oc" Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.145272 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f312ff2-e1e9-468a-8fa2-baf40b465121" containerName="manage-dockerfile" Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.145278 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f312ff2-e1e9-468a-8fa2-baf40b465121" containerName="manage-dockerfile" Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.145416 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="9f5d729a-317b-4713-a3c3-c5cc309edc5e" containerName="oc" Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.145431 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f312ff2-e1e9-468a-8fa2-baf40b465121" containerName="manage-dockerfile" Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.723796 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.724286 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.833269 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-w92s7"] Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.833481 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-w92s7" Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.836792 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.837321 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-96gbq\"" Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.837857 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:28:00 crc kubenswrapper[5107]: I0126 00:28:00.899939 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf75c\" (UniqueName: \"kubernetes.io/projected/85c372b6-a6e9-46ec-b505-b7052e081793-kube-api-access-sf75c\") pod \"auto-csr-approver-29489788-w92s7\" (UID: \"85c372b6-a6e9-46ec-b505-b7052e081793\") " pod="openshift-infra/auto-csr-approver-29489788-w92s7" Jan 26 00:28:01 crc kubenswrapper[5107]: I0126 00:28:01.002695 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sf75c\" (UniqueName: \"kubernetes.io/projected/85c372b6-a6e9-46ec-b505-b7052e081793-kube-api-access-sf75c\") pod \"auto-csr-approver-29489788-w92s7\" (UID: \"85c372b6-a6e9-46ec-b505-b7052e081793\") " pod="openshift-infra/auto-csr-approver-29489788-w92s7" Jan 26 00:28:01 crc kubenswrapper[5107]: I0126 00:28:01.025491 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf75c\" (UniqueName: \"kubernetes.io/projected/85c372b6-a6e9-46ec-b505-b7052e081793-kube-api-access-sf75c\") pod \"auto-csr-approver-29489788-w92s7\" (UID: \"85c372b6-a6e9-46ec-b505-b7052e081793\") " pod="openshift-infra/auto-csr-approver-29489788-w92s7" Jan 26 00:28:01 crc kubenswrapper[5107]: I0126 00:28:01.163146 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-w92s7" Jan 26 00:28:01 crc kubenswrapper[5107]: I0126 00:28:01.489157 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-w92s7"] Jan 26 00:28:01 crc kubenswrapper[5107]: I0126 00:28:01.676224 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-w92s7" event={"ID":"85c372b6-a6e9-46ec-b505-b7052e081793","Type":"ContainerStarted","Data":"408bd14623f1f30b94c8fabc3ef24206672359a176c3de5cf72c3e73795bf11d"} Jan 26 00:28:04 crc kubenswrapper[5107]: I0126 00:28:04.709470 5107 generic.go:358] "Generic (PLEG): container finished" podID="85c372b6-a6e9-46ec-b505-b7052e081793" containerID="fb7fcdf7e8d0844060b5a3803d6d56d1d2efb8aebd463e7b78df930ae05bac0c" exitCode=0 Jan 26 00:28:04 crc kubenswrapper[5107]: I0126 00:28:04.709624 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-w92s7" event={"ID":"85c372b6-a6e9-46ec-b505-b7052e081793","Type":"ContainerDied","Data":"fb7fcdf7e8d0844060b5a3803d6d56d1d2efb8aebd463e7b78df930ae05bac0c"} Jan 26 00:28:05 crc kubenswrapper[5107]: I0126 00:28:05.955914 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-w92s7" Jan 26 00:28:06 crc kubenswrapper[5107]: I0126 00:28:06.067122 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf75c\" (UniqueName: \"kubernetes.io/projected/85c372b6-a6e9-46ec-b505-b7052e081793-kube-api-access-sf75c\") pod \"85c372b6-a6e9-46ec-b505-b7052e081793\" (UID: \"85c372b6-a6e9-46ec-b505-b7052e081793\") " Jan 26 00:28:06 crc kubenswrapper[5107]: I0126 00:28:06.075660 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c372b6-a6e9-46ec-b505-b7052e081793-kube-api-access-sf75c" (OuterVolumeSpecName: "kube-api-access-sf75c") pod "85c372b6-a6e9-46ec-b505-b7052e081793" (UID: "85c372b6-a6e9-46ec-b505-b7052e081793"). InnerVolumeSpecName "kube-api-access-sf75c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:28:06 crc kubenswrapper[5107]: I0126 00:28:06.169770 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sf75c\" (UniqueName: \"kubernetes.io/projected/85c372b6-a6e9-46ec-b505-b7052e081793-kube-api-access-sf75c\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:06 crc kubenswrapper[5107]: I0126 00:28:06.728216 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-w92s7" Jan 26 00:28:06 crc kubenswrapper[5107]: I0126 00:28:06.728220 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-w92s7" event={"ID":"85c372b6-a6e9-46ec-b505-b7052e081793","Type":"ContainerDied","Data":"408bd14623f1f30b94c8fabc3ef24206672359a176c3de5cf72c3e73795bf11d"} Jan 26 00:28:06 crc kubenswrapper[5107]: I0126 00:28:06.728532 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="408bd14623f1f30b94c8fabc3ef24206672359a176c3de5cf72c3e73795bf11d" Jan 26 00:28:07 crc kubenswrapper[5107]: I0126 00:28:07.080620 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-nmsvm"] Jan 26 00:28:07 crc kubenswrapper[5107]: I0126 00:28:07.087957 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-nmsvm"] Jan 26 00:28:08 crc kubenswrapper[5107]: I0126 00:28:08.123169 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec" path="/var/lib/kubelet/pods/a4a83ec9-dea0-40e1-ba37-4eb4e2edb9ec/volumes" Jan 26 00:28:11 crc kubenswrapper[5107]: I0126 00:28:11.532578 5107 scope.go:117] "RemoveContainer" containerID="1475fcefbeb042c17674098c08eba950b93da7effbc00aaeaf7f6a7b87cff919" Jan 26 00:28:30 crc kubenswrapper[5107]: I0126 00:28:30.723967 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:28:30 crc kubenswrapper[5107]: I0126 00:28:30.724701 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:28:30 crc kubenswrapper[5107]: I0126 00:28:30.724778 5107 kubelet.go:2658] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:28:30 crc kubenswrapper[5107]: I0126 00:28:30.725541 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e28c2aa1f735d66e536651d9e0f8d196d2dccaf318caefe5b09e5743bda32586"} pod="openshift-machine-config-operator/machine-config-daemon-94c4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:28:30 crc kubenswrapper[5107]: I0126 00:28:30.725632 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" containerID="cri-o://e28c2aa1f735d66e536651d9e0f8d196d2dccaf318caefe5b09e5743bda32586" gracePeriod=600 Jan 26 00:28:31 crc kubenswrapper[5107]: I0126 00:28:31.935958 5107 generic.go:358] "Generic (PLEG): container finished" podID="7d907601-1852-43f9-8a70-ef4e71351e81" containerID="e28c2aa1f735d66e536651d9e0f8d196d2dccaf318caefe5b09e5743bda32586" exitCode=0 Jan 26 00:28:31 crc kubenswrapper[5107]: I0126 00:28:31.936078 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerDied","Data":"e28c2aa1f735d66e536651d9e0f8d196d2dccaf318caefe5b09e5743bda32586"} Jan 26 00:28:31 crc kubenswrapper[5107]: I0126 00:28:31.936625 5107 scope.go:117] "RemoveContainer" containerID="ed7fa55042f2cc4045dc49359ff131078dd30efec1ec5c7e0bdd12d2f213019e" Jan 26 00:28:35 crc kubenswrapper[5107]: I0126 00:28:35.976670 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerStarted","Data":"482044e2b3d805fd888f02ddc223f22c33448ddc500cab5ae44472e3724cc425"} Jan 26 00:28:49 crc kubenswrapper[5107]: I0126 00:28:49.088035 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e49e6c4b-b61e-40bf-8b52-2abf782b22df/docker-build/0.log" Jan 26 00:28:49 crc kubenswrapper[5107]: I0126 00:28:49.089974 5107 generic.go:358] "Generic (PLEG): container finished" podID="e49e6c4b-b61e-40bf-8b52-2abf782b22df" containerID="7db11ea3515ec8912923cbef142405d3ba361daf97be5754d64a557d24dac88d" exitCode=1 Jan 26 00:28:49 crc kubenswrapper[5107]: I0126 00:28:49.090083 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e49e6c4b-b61e-40bf-8b52-2abf782b22df","Type":"ContainerDied","Data":"7db11ea3515ec8912923cbef142405d3ba361daf97be5754d64a557d24dac88d"} Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.369457 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e49e6c4b-b61e-40bf-8b52-2abf782b22df/docker-build/0.log" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.371227 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.545668 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-pull\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.545756 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-push\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.545808 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-system-configs\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.545838 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-run\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.545919 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-proxy-ca-bundles\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.546263 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-ca-bundles\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.546394 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-root\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.546471 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-node-pullsecrets\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.546633 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8527d\" (UniqueName: \"kubernetes.io/projected/e49e6c4b-b61e-40bf-8b52-2abf782b22df-kube-api-access-8527d\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.546683 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildcachedir\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.546715 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-blob-cache\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.546840 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildworkdir\") pod \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\" (UID: \"e49e6c4b-b61e-40bf-8b52-2abf782b22df\") " Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.547079 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.547059 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.547190 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.547478 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.547510 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.547525 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e49e6c4b-b61e-40bf-8b52-2abf782b22df-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.547717 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.547814 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.548231 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.565087 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-push" (OuterVolumeSpecName: "builder-dockercfg-x24l2-push") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "builder-dockercfg-x24l2-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.565221 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-pull" (OuterVolumeSpecName: "builder-dockercfg-x24l2-pull") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "builder-dockercfg-x24l2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.570801 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49e6c4b-b61e-40bf-8b52-2abf782b22df-kube-api-access-8527d" (OuterVolumeSpecName: "kube-api-access-8527d") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "kube-api-access-8527d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.586621 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.648799 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.648868 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8527d\" (UniqueName: \"kubernetes.io/projected/e49e6c4b-b61e-40bf-8b52-2abf782b22df-kube-api-access-8527d\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.648950 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.648964 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.648978 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/e49e6c4b-b61e-40bf-8b52-2abf782b22df-builder-dockercfg-x24l2-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.648990 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.649001 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.768424 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:50 crc kubenswrapper[5107]: I0126 00:28:50.853135 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:51 crc kubenswrapper[5107]: I0126 00:28:51.106477 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e49e6c4b-b61e-40bf-8b52-2abf782b22df/docker-build/0.log" Jan 26 00:28:51 crc kubenswrapper[5107]: I0126 00:28:51.107427 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:28:51 crc kubenswrapper[5107]: I0126 00:28:51.107523 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"e49e6c4b-b61e-40bf-8b52-2abf782b22df","Type":"ContainerDied","Data":"986f7f46c8a5b0aed0d8345aebad3e2e3c4d2ccae284dd952d2fbe551bb1ad73"} Jan 26 00:28:51 crc kubenswrapper[5107]: I0126 00:28:51.107654 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="986f7f46c8a5b0aed0d8345aebad3e2e3c4d2ccae284dd952d2fbe551bb1ad73" Jan 26 00:28:52 crc kubenswrapper[5107]: I0126 00:28:52.428774 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e49e6c4b-b61e-40bf-8b52-2abf782b22df" (UID: "e49e6c4b-b61e-40bf-8b52-2abf782b22df"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:28:52 crc kubenswrapper[5107]: I0126 00:28:52.480434 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e49e6c4b-b61e-40bf-8b52-2abf782b22df-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.268728 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.270646 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e49e6c4b-b61e-40bf-8b52-2abf782b22df" containerName="docker-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.270671 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49e6c4b-b61e-40bf-8b52-2abf782b22df" containerName="docker-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.270688 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e49e6c4b-b61e-40bf-8b52-2abf782b22df" containerName="git-clone" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.270696 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49e6c4b-b61e-40bf-8b52-2abf782b22df" containerName="git-clone" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.270707 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="85c372b6-a6e9-46ec-b505-b7052e081793" containerName="oc" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.270716 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c372b6-a6e9-46ec-b505-b7052e081793" containerName="oc" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.270737 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e49e6c4b-b61e-40bf-8b52-2abf782b22df" containerName="manage-dockerfile" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.270744 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49e6c4b-b61e-40bf-8b52-2abf782b22df" containerName="manage-dockerfile" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.271111 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="85c372b6-a6e9-46ec-b505-b7052e081793" containerName="oc" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.271136 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="e49e6c4b-b61e-40bf-8b52-2abf782b22df" containerName="docker-build" Jan 26 
00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.300471 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.300714 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.305025 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\"" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.305437 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\"" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.305656 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-x24l2\"" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.306846 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\"" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.428441 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.428542 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.428592 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.428800 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.429131 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.429182 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.429286 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.429394 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.429434 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.429476 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzf2j\" (UniqueName: \"kubernetes.io/projected/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-kube-api-access-qzf2j\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.429750 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.430213 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.532730 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.532838 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.532903 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.532937 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.532958 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qzf2j\" (UniqueName: \"kubernetes.io/projected/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-kube-api-access-qzf2j\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.533010 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.533035 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.533122 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.533181 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.533237 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " 
pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.533297 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.533366 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.533517 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.533556 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.533996 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.534836 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.535057 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.536262 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.536776 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.537048 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.537573 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.546182 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.548213 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.558397 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzf2j\" (UniqueName: \"kubernetes.io/projected/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-kube-api-access-qzf2j\") pod \"service-telemetry-operator-3-build\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.626369 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:01 crc kubenswrapper[5107]: I0126 00:29:01.921812 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:29:02 crc kubenswrapper[5107]: I0126 00:29:02.203046 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a","Type":"ContainerStarted","Data":"f590041ef41cb89a8815e3467dcd844d302cec651c1093a94c047813127bc917"} Jan 26 00:29:03 crc kubenswrapper[5107]: I0126 00:29:03.214122 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a","Type":"ContainerStarted","Data":"3491150d9ab6c76d99b42602cc367c30daf35f2c380a4d660ab4b44d63b6c65a"} Jan 26 00:29:11 crc kubenswrapper[5107]: I0126 00:29:11.292490 5107 generic.go:358] "Generic (PLEG): container finished" podID="56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" containerID="3491150d9ab6c76d99b42602cc367c30daf35f2c380a4d660ab4b44d63b6c65a" exitCode=0 Jan 26 00:29:11 crc kubenswrapper[5107]: I0126 00:29:11.292687 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a","Type":"ContainerDied","Data":"3491150d9ab6c76d99b42602cc367c30daf35f2c380a4d660ab4b44d63b6c65a"} Jan 26 00:29:12 crc kubenswrapper[5107]: I0126 00:29:12.303392 5107 generic.go:358] "Generic (PLEG): container finished" podID="56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" containerID="fda9833c8a68f16e1478f1b62a839cd2d95254ba402eb289e63ef6f68fbbf515" exitCode=0 Jan 26 00:29:12 crc kubenswrapper[5107]: I0126 00:29:12.303493 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a","Type":"ContainerDied","Data":"fda9833c8a68f16e1478f1b62a839cd2d95254ba402eb289e63ef6f68fbbf515"} Jan 26 00:29:12 crc kubenswrapper[5107]: I0126 00:29:12.342774 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_56677a28-74b1-42c7-a42b-1aaf1ebcdc8a/manage-dockerfile/0.log" Jan 26 00:29:13 crc kubenswrapper[5107]: I0126 00:29:13.314536 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a","Type":"ContainerStarted","Data":"665ee989e28a4ae6a1e09fd50cf0247f8826d305bce0aad114b6d8294f728968"} Jan 26 00:29:13 crc kubenswrapper[5107]: I0126 00:29:13.358985 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-3-build" podStartSLOduration=12.358959113 podStartE2EDuration="12.358959113s" podCreationTimestamp="2026-01-26 00:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:29:13.35641841 +0000 UTC m=+1198.274012766" watchObservedRunningTime="2026-01-26 00:29:13.358959113 +0000 UTC m=+1198.276553459" Jan 26 00:29:17 crc kubenswrapper[5107]: I0126 00:29:17.123878 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e49e6c4b-b61e-40bf-8b52-2abf782b22df/docker-build/0.log" Jan 26 00:29:17 crc kubenswrapper[5107]: I0126 00:29:17.132358 5107 
log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e49e6c4b-b61e-40bf-8b52-2abf782b22df/docker-build/0.log" Jan 26 00:29:17 crc kubenswrapper[5107]: I0126 00:29:17.189585 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f2mpq_2e5342d5-2d0c-458d-94b7-25c802ce298a/kube-multus/0.log" Jan 26 00:29:17 crc kubenswrapper[5107]: I0126 00:29:17.193799 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f2mpq_2e5342d5-2d0c-458d-94b7-25c802ce298a/kube-multus/0.log" Jan 26 00:29:17 crc kubenswrapper[5107]: I0126 00:29:17.195499 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:29:17 crc kubenswrapper[5107]: I0126 00:29:17.199191 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:29:17 crc kubenswrapper[5107]: I0126 00:29:17.204187 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:29:17 crc kubenswrapper[5107]: I0126 00:29:17.206647 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:30:00 crc kubenswrapper[5107]: I0126 00:30:00.161064 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489790-s6zbp"] Jan 26 00:30:03 crc kubenswrapper[5107]: I0126 00:30:03.871786 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm"] Jan 26 00:30:03 crc kubenswrapper[5107]: I0126 00:30:03.872035 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-s6zbp" Jan 26 00:30:03 crc kubenswrapper[5107]: I0126 00:30:03.877501 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-96gbq\"" Jan 26 00:30:03 crc kubenswrapper[5107]: I0126 00:30:03.877530 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:30:03 crc kubenswrapper[5107]: I0126 00:30:03.877603 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:30:03 crc kubenswrapper[5107]: I0126 00:30:03.899243 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:03 crc kubenswrapper[5107]: I0126 00:30:03.902754 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:30:03 crc kubenswrapper[5107]: I0126 00:30:03.902800 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 26 00:30:03 crc kubenswrapper[5107]: I0126 00:30:03.913503 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-s6zbp"] Jan 26 00:30:03 crc kubenswrapper[5107]: I0126 00:30:03.913552 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm"] Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.019228 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4km7\" (UniqueName: \"kubernetes.io/projected/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-kube-api-access-r4km7\") pod \"collect-profiles-29489790-bl9sm\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.019824 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4c4\" (UniqueName: \"kubernetes.io/projected/3623094c-365d-4e73-a6e4-f7a89846508b-kube-api-access-zh4c4\") pod \"auto-csr-approver-29489790-s6zbp\" (UID: \"3623094c-365d-4e73-a6e4-f7a89846508b\") " pod="openshift-infra/auto-csr-approver-29489790-s6zbp" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.019905 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-config-volume\") pod \"collect-profiles-29489790-bl9sm\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.019940 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-secret-volume\") pod \"collect-profiles-29489790-bl9sm\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.121627 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r4km7\" (UniqueName: \"kubernetes.io/projected/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-kube-api-access-r4km7\") pod \"collect-profiles-29489790-bl9sm\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.121736 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zh4c4\" (UniqueName: \"kubernetes.io/projected/3623094c-365d-4e73-a6e4-f7a89846508b-kube-api-access-zh4c4\") pod \"auto-csr-approver-29489790-s6zbp\" (UID: \"3623094c-365d-4e73-a6e4-f7a89846508b\") " pod="openshift-infra/auto-csr-approver-29489790-s6zbp" Jan 26 00:30:04 crc 
kubenswrapper[5107]: I0126 00:30:04.121781 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-config-volume\") pod \"collect-profiles-29489790-bl9sm\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.121809 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-secret-volume\") pod \"collect-profiles-29489790-bl9sm\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.124013 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-config-volume\") pod \"collect-profiles-29489790-bl9sm\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.142404 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh4c4\" (UniqueName: \"kubernetes.io/projected/3623094c-365d-4e73-a6e4-f7a89846508b-kube-api-access-zh4c4\") pod \"auto-csr-approver-29489790-s6zbp\" (UID: \"3623094c-365d-4e73-a6e4-f7a89846508b\") " pod="openshift-infra/auto-csr-approver-29489790-s6zbp" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.145261 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4km7\" (UniqueName: \"kubernetes.io/projected/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-kube-api-access-r4km7\") pod \"collect-profiles-29489790-bl9sm\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.145970 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-secret-volume\") pod \"collect-profiles-29489790-bl9sm\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.193041 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-s6zbp" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.223199 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.544460 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm"] Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.706533 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-s6zbp"] Jan 26 00:30:04 crc kubenswrapper[5107]: W0126 00:30:04.715531 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3623094c_365d_4e73_a6e4_f7a89846508b.slice/crio-9da4cd62c1688e00b237ed99929d1ce84deea221966b288851dc59fc1ba4713e WatchSource:0}: Error finding container 9da4cd62c1688e00b237ed99929d1ce84deea221966b288851dc59fc1ba4713e: Status 404 returned error can't find the container with id 9da4cd62c1688e00b237ed99929d1ce84deea221966b288851dc59fc1ba4713e Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.727861 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-s6zbp" event={"ID":"3623094c-365d-4e73-a6e4-f7a89846508b","Type":"ContainerStarted","Data":"9da4cd62c1688e00b237ed99929d1ce84deea221966b288851dc59fc1ba4713e"} Jan 26 00:30:04 crc kubenswrapper[5107]: I0126 00:30:04.731684 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" event={"ID":"af5990d6-d41c-4b26-8ee5-bf59f03b20e3","Type":"ContainerStarted","Data":"68a180e6d81e6269c334c343e88804ef44837f7b085fb976b673598c224809de"} Jan 26 00:30:05 crc kubenswrapper[5107]: I0126 00:30:05.745561 5107 generic.go:358] "Generic (PLEG): container finished" podID="af5990d6-d41c-4b26-8ee5-bf59f03b20e3" containerID="51ca94759a53bc9822489c0df710b8b95e4f30058cca46ad32ddf30d9c8cdca3" exitCode=0 Jan 26 00:30:05 crc kubenswrapper[5107]: I0126 00:30:05.746164 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" event={"ID":"af5990d6-d41c-4b26-8ee5-bf59f03b20e3","Type":"ContainerDied","Data":"51ca94759a53bc9822489c0df710b8b95e4f30058cca46ad32ddf30d9c8cdca3"} Jan 26 00:30:06 crc kubenswrapper[5107]: I0126 00:30:06.757668 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-s6zbp" event={"ID":"3623094c-365d-4e73-a6e4-f7a89846508b","Type":"ContainerStarted","Data":"fab8fbe342061c5cbf4058d30d524bbefc446e439009a7b3669fceb29ac3f57d"} Jan 26 00:30:06 crc kubenswrapper[5107]: I0126 00:30:06.776955 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489790-s6zbp" podStartSLOduration=5.148356471 podStartE2EDuration="6.776933519s" podCreationTimestamp="2026-01-26 00:30:00 +0000 UTC" firstStartedPulling="2026-01-26 00:30:04.717838412 +0000 UTC m=+1249.635432758" lastFinishedPulling="2026-01-26 00:30:06.34641546 +0000 UTC m=+1251.264009806" observedRunningTime="2026-01-26 00:30:06.775781616 +0000 UTC m=+1251.693375962" watchObservedRunningTime="2026-01-26 00:30:06.776933519 +0000 UTC m=+1251.694527865" Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.002488 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.101267 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-config-volume\") pod \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.101470 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-secret-volume\") pod \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.101600 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4km7\" (UniqueName: \"kubernetes.io/projected/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-kube-api-access-r4km7\") pod \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\" (UID: \"af5990d6-d41c-4b26-8ee5-bf59f03b20e3\") " Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.102590 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "af5990d6-d41c-4b26-8ee5-bf59f03b20e3" (UID: "af5990d6-d41c-4b26-8ee5-bf59f03b20e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.108406 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-kube-api-access-r4km7" (OuterVolumeSpecName: "kube-api-access-r4km7") pod "af5990d6-d41c-4b26-8ee5-bf59f03b20e3" (UID: "af5990d6-d41c-4b26-8ee5-bf59f03b20e3"). InnerVolumeSpecName "kube-api-access-r4km7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.109324 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "af5990d6-d41c-4b26-8ee5-bf59f03b20e3" (UID: "af5990d6-d41c-4b26-8ee5-bf59f03b20e3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.203483 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r4km7\" (UniqueName: \"kubernetes.io/projected/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-kube-api-access-r4km7\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.203837 5107 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.203954 5107 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af5990d6-d41c-4b26-8ee5-bf59f03b20e3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.767263 5107 generic.go:358] "Generic (PLEG): container finished" podID="3623094c-365d-4e73-a6e4-f7a89846508b" containerID="fab8fbe342061c5cbf4058d30d524bbefc446e439009a7b3669fceb29ac3f57d" exitCode=0 Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.767374 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-s6zbp" event={"ID":"3623094c-365d-4e73-a6e4-f7a89846508b","Type":"ContainerDied","Data":"fab8fbe342061c5cbf4058d30d524bbefc446e439009a7b3669fceb29ac3f57d"} Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.771008 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" event={"ID":"af5990d6-d41c-4b26-8ee5-bf59f03b20e3","Type":"ContainerDied","Data":"68a180e6d81e6269c334c343e88804ef44837f7b085fb976b673598c224809de"} Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.771056 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68a180e6d81e6269c334c343e88804ef44837f7b085fb976b673598c224809de" Jan 26 00:30:07 crc kubenswrapper[5107]: I0126 00:30:07.771069 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-bl9sm" Jan 26 00:30:09 crc kubenswrapper[5107]: I0126 00:30:09.083461 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-s6zbp" Jan 26 00:30:09 crc kubenswrapper[5107]: I0126 00:30:09.143296 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh4c4\" (UniqueName: \"kubernetes.io/projected/3623094c-365d-4e73-a6e4-f7a89846508b-kube-api-access-zh4c4\") pod \"3623094c-365d-4e73-a6e4-f7a89846508b\" (UID: \"3623094c-365d-4e73-a6e4-f7a89846508b\") " Jan 26 00:30:09 crc kubenswrapper[5107]: I0126 00:30:09.149902 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3623094c-365d-4e73-a6e4-f7a89846508b-kube-api-access-zh4c4" (OuterVolumeSpecName: "kube-api-access-zh4c4") pod "3623094c-365d-4e73-a6e4-f7a89846508b" (UID: "3623094c-365d-4e73-a6e4-f7a89846508b"). InnerVolumeSpecName "kube-api-access-zh4c4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:30:09 crc kubenswrapper[5107]: I0126 00:30:09.225007 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-j7vms"] Jan 26 00:30:09 crc kubenswrapper[5107]: I0126 00:30:09.229794 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-j7vms"] Jan 26 00:30:09 crc kubenswrapper[5107]: I0126 00:30:09.245200 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zh4c4\" (UniqueName: \"kubernetes.io/projected/3623094c-365d-4e73-a6e4-f7a89846508b-kube-api-access-zh4c4\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:09 crc kubenswrapper[5107]: I0126 00:30:09.788776 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-s6zbp" Jan 26 00:30:09 crc kubenswrapper[5107]: I0126 00:30:09.788773 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-s6zbp" event={"ID":"3623094c-365d-4e73-a6e4-f7a89846508b","Type":"ContainerDied","Data":"9da4cd62c1688e00b237ed99929d1ce84deea221966b288851dc59fc1ba4713e"} Jan 26 00:30:09 crc kubenswrapper[5107]: I0126 00:30:09.790084 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9da4cd62c1688e00b237ed99929d1ce84deea221966b288851dc59fc1ba4713e" Jan 26 00:30:10 crc kubenswrapper[5107]: I0126 00:30:10.123833 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e0d47be-2e55-4a4f-8d6e-8b513823b753" path="/var/lib/kubelet/pods/1e0d47be-2e55-4a4f-8d6e-8b513823b753/volumes" Jan 26 00:30:11 crc kubenswrapper[5107]: I0126 00:30:11.720306 5107 scope.go:117] "RemoveContainer" containerID="ca2b3b8c02bb8fbf9781de2148957e38923de7866b7568ad785b010fe74a0187" Jan 26 00:30:35 crc kubenswrapper[5107]: I0126 00:30:35.159539 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_56677a28-74b1-42c7-a42b-1aaf1ebcdc8a/docker-build/0.log" Jan 26 00:30:35 crc kubenswrapper[5107]: I0126 00:30:35.162851 5107 generic.go:358] "Generic (PLEG): container finished" podID="56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" containerID="665ee989e28a4ae6a1e09fd50cf0247f8826d305bce0aad114b6d8294f728968" exitCode=1 Jan 26 00:30:35 crc kubenswrapper[5107]: I0126 00:30:35.162930 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a","Type":"ContainerDied","Data":"665ee989e28a4ae6a1e09fd50cf0247f8826d305bce0aad114b6d8294f728968"} Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.452532 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_56677a28-74b1-42c7-a42b-1aaf1ebcdc8a/docker-build/0.log" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.454745 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.559601 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-root\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.559676 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildcachedir\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.559722 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-system-configs\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.559823 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzf2j\" (UniqueName: \"kubernetes.io/projected/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-kube-api-access-qzf2j\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.559876 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-push\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.559922 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-pull\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.559921 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.559957 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildworkdir\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.560119 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-proxy-ca-bundles\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.560199 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-blob-cache\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.560249 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-node-pullsecrets\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.560287 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-ca-bundles\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.560450 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-run\") pod \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\" (UID: \"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a\") " Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.560433 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.561142 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.561171 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.561769 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.561848 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.561831 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.562270 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.568425 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-kube-api-access-qzf2j" (OuterVolumeSpecName: "kube-api-access-qzf2j") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "kube-api-access-qzf2j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.572793 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-pull" (OuterVolumeSpecName: "builder-dockercfg-x24l2-pull") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "builder-dockercfg-x24l2-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.573054 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-push" (OuterVolumeSpecName: "builder-dockercfg-x24l2-push") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "builder-dockercfg-x24l2-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.602602 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.662444 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.662508 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qzf2j\" (UniqueName: \"kubernetes.io/projected/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-kube-api-access-qzf2j\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.662522 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.662542 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-builder-dockercfg-x24l2-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.662556 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.662596 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.662609 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:36 crc kubenswrapper[5107]: I0126 00:30:36.662621 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:37 crc kubenswrapper[5107]: I0126 00:30:37.185427 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_56677a28-74b1-42c7-a42b-1aaf1ebcdc8a/docker-build/0.log" Jan 26 00:30:37 crc kubenswrapper[5107]: I0126 00:30:37.186848 5107 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"56677a28-74b1-42c7-a42b-1aaf1ebcdc8a","Type":"ContainerDied","Data":"f590041ef41cb89a8815e3467dcd844d302cec651c1093a94c047813127bc917"} Jan 26 00:30:37 crc kubenswrapper[5107]: I0126 00:30:37.186874 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:30:37 crc kubenswrapper[5107]: I0126 00:30:37.186938 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f590041ef41cb89a8815e3467dcd844d302cec651c1093a94c047813127bc917" Jan 26 00:30:37 crc kubenswrapper[5107]: I0126 00:30:37.244670 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:37 crc kubenswrapper[5107]: I0126 00:30:37.273768 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:38 crc kubenswrapper[5107]: I0126 00:30:38.510201 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" (UID: "56677a28-74b1-42c7-a42b-1aaf1ebcdc8a"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:38 crc kubenswrapper[5107]: I0126 00:30:38.595719 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/56677a28-74b1-42c7-a42b-1aaf1ebcdc8a-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.558561 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560113 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" containerName="manage-dockerfile" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560537 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" containerName="manage-dockerfile" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560558 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3623094c-365d-4e73-a6e4-f7a89846508b" containerName="oc" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560570 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3623094c-365d-4e73-a6e4-f7a89846508b" containerName="oc" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560616 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af5990d6-d41c-4b26-8ee5-bf59f03b20e3" containerName="collect-profiles" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560628 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5990d6-d41c-4b26-8ee5-bf59f03b20e3" containerName="collect-profiles" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560644 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" containerName="git-clone" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560652 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" containerName="git-clone" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560693 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" containerName="docker-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560701 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" containerName="docker-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560939 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="56677a28-74b1-42c7-a42b-1aaf1ebcdc8a" containerName="docker-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560963 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3623094c-365d-4e73-a6e4-f7a89846508b" containerName="oc" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.560978 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="af5990d6-d41c-4b26-8ee5-bf59f03b20e3" containerName="collect-profiles" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.567733 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.570754 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\"" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.570924 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\"" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.571034 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\"" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.571438 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-x24l2\"" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.587810 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.648815 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.648908 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.648945 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.648966 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.649006 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.649115 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-system-configs\") pod 
\"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.649210 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.649356 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jcr8\" (UniqueName: \"kubernetes.io/projected/6194cf20-381a-4030-a802-413bdf580aca-kube-api-access-9jcr8\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.649454 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.649528 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.649672 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.649796 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.751689 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.751773 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-4-build\" (UID: 
\"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.751841 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.751876 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.751933 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.751983 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.752021 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.752050 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.752077 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.752105 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.752149 5107 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.752172 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.752211 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9jcr8\" (UniqueName: \"kubernetes.io/projected/6194cf20-381a-4030-a802-413bdf580aca-kube-api-access-9jcr8\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.752396 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.752632 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.752781 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.753024 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.753112 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.753325 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " 
pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.753555 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.754354 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.760996 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.766685 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:47 crc kubenswrapper[5107]: I0126 00:30:47.774953 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jcr8\" (UniqueName: \"kubernetes.io/projected/6194cf20-381a-4030-a802-413bdf580aca-kube-api-access-9jcr8\") pod \"service-telemetry-operator-4-build\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:48 crc kubenswrapper[5107]: I0126 00:30:47.998923 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:48 crc kubenswrapper[5107]: I0126 00:30:48.228011 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:30:48 crc kubenswrapper[5107]: I0126 00:30:48.240939 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:30:48 crc kubenswrapper[5107]: I0126 00:30:48.303893 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"6194cf20-381a-4030-a802-413bdf580aca","Type":"ContainerStarted","Data":"c2f500fe408e3b6cb4e1434a13650a99332e498ba9db524aa3880723c727cb84"} Jan 26 00:30:49 crc kubenswrapper[5107]: I0126 00:30:49.315786 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"6194cf20-381a-4030-a802-413bdf580aca","Type":"ContainerStarted","Data":"a1401788f58284a599bed017633174261d0cf431328eb0ec7d7ef71f642d552e"} Jan 26 00:30:58 crc kubenswrapper[5107]: I0126 00:30:58.450480 5107 generic.go:358] "Generic (PLEG): container finished" podID="6194cf20-381a-4030-a802-413bdf580aca" containerID="a1401788f58284a599bed017633174261d0cf431328eb0ec7d7ef71f642d552e" exitCode=0 Jan 26 00:30:58 crc kubenswrapper[5107]: I0126 00:30:58.450577 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"6194cf20-381a-4030-a802-413bdf580aca","Type":"ContainerDied","Data":"a1401788f58284a599bed017633174261d0cf431328eb0ec7d7ef71f642d552e"} Jan 26 00:30:59 crc kubenswrapper[5107]: I0126 00:30:59.462607 5107 generic.go:358] "Generic (PLEG): container finished" podID="6194cf20-381a-4030-a802-413bdf580aca" containerID="b6d5fd80d11a1c87042e2436093a0ee2372af0dd0652e0781d85c090f403eedb" exitCode=0 Jan 26 00:30:59 crc kubenswrapper[5107]: I0126 00:30:59.462831 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"6194cf20-381a-4030-a802-413bdf580aca","Type":"ContainerDied","Data":"b6d5fd80d11a1c87042e2436093a0ee2372af0dd0652e0781d85c090f403eedb"} Jan 26 00:30:59 crc kubenswrapper[5107]: I0126 00:30:59.512201 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_6194cf20-381a-4030-a802-413bdf580aca/manage-dockerfile/0.log" Jan 26 00:31:00 crc kubenswrapper[5107]: I0126 00:31:00.478157 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"6194cf20-381a-4030-a802-413bdf580aca","Type":"ContainerStarted","Data":"975ab1fa8970e5319e6e83feffe3a8c9cf327b4784fe39119c381d5655378fdf"} Jan 26 00:31:00 crc kubenswrapper[5107]: I0126 00:31:00.510405 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-4-build" podStartSLOduration=13.510379168 podStartE2EDuration="13.510379168s" podCreationTimestamp="2026-01-26 00:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:31:00.507506187 +0000 UTC m=+1305.425100533" watchObservedRunningTime="2026-01-26 00:31:00.510379168 +0000 UTC m=+1305.427973514" Jan 26 00:31:00 crc kubenswrapper[5107]: I0126 00:31:00.724296 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:31:00 crc kubenswrapper[5107]: I0126 00:31:00.724398 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:31:30 crc kubenswrapper[5107]: I0126 00:31:30.723715 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:31:30 crc kubenswrapper[5107]: I0126 00:31:30.724201 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.155626 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489792-9kxhk"] Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.196237 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-9kxhk"] Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.196537 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-9kxhk" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.200263 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.200604 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-96gbq\"" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.201713 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.323808 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l6sv\" (UniqueName: \"kubernetes.io/projected/261140c0-a32c-4656-914c-7b6c9f0c8968-kube-api-access-7l6sv\") pod \"auto-csr-approver-29489792-9kxhk\" (UID: \"261140c0-a32c-4656-914c-7b6c9f0c8968\") " pod="openshift-infra/auto-csr-approver-29489792-9kxhk" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.426196 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7l6sv\" (UniqueName: \"kubernetes.io/projected/261140c0-a32c-4656-914c-7b6c9f0c8968-kube-api-access-7l6sv\") pod \"auto-csr-approver-29489792-9kxhk\" (UID: \"261140c0-a32c-4656-914c-7b6c9f0c8968\") " pod="openshift-infra/auto-csr-approver-29489792-9kxhk" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.450848 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l6sv\" (UniqueName: \"kubernetes.io/projected/261140c0-a32c-4656-914c-7b6c9f0c8968-kube-api-access-7l6sv\") pod \"auto-csr-approver-29489792-9kxhk\" (UID: \"261140c0-a32c-4656-914c-7b6c9f0c8968\") " pod="openshift-infra/auto-csr-approver-29489792-9kxhk" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.524660 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-9kxhk" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.723825 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.723940 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.723999 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.725196 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"482044e2b3d805fd888f02ddc223f22c33448ddc500cab5ae44472e3724cc425"} pod="openshift-machine-config-operator/machine-config-daemon-94c4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.725271 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" containerID="cri-o://482044e2b3d805fd888f02ddc223f22c33448ddc500cab5ae44472e3724cc425" gracePeriod=600 Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.785215 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-9kxhk"] Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.986626 5107 generic.go:358] "Generic (PLEG): container finished" podID="7d907601-1852-43f9-8a70-ef4e71351e81" containerID="482044e2b3d805fd888f02ddc223f22c33448ddc500cab5ae44472e3724cc425" exitCode=0 Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.986703 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerDied","Data":"482044e2b3d805fd888f02ddc223f22c33448ddc500cab5ae44472e3724cc425"} Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.986762 5107 scope.go:117] "RemoveContainer" containerID="e28c2aa1f735d66e536651d9e0f8d196d2dccaf318caefe5b09e5743bda32586" Jan 26 00:32:00 crc kubenswrapper[5107]: I0126 00:32:00.988602 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-9kxhk" event={"ID":"261140c0-a32c-4656-914c-7b6c9f0c8968","Type":"ContainerStarted","Data":"b50e3aa08a87278c88cbd490e88ea3da3ccc3d09ebda1e246690145a0a2b1315"} Jan 26 00:32:02 crc kubenswrapper[5107]: I0126 00:32:01.999558 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerStarted","Data":"68a6136f0fb49dc0b05970bd45082071005debf7e7794eeb59067e6ae923b996"} Jan 26 00:32:03 crc kubenswrapper[5107]: I0126 00:32:03.010382 5107 generic.go:358] 
"Generic (PLEG): container finished" podID="261140c0-a32c-4656-914c-7b6c9f0c8968" containerID="c39cba9455b3ee0cdc317f027086cb8fe1123bcfb8b7290d40d1b59cbbb93a79" exitCode=0 Jan 26 00:32:03 crc kubenswrapper[5107]: I0126 00:32:03.010534 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-9kxhk" event={"ID":"261140c0-a32c-4656-914c-7b6c9f0c8968","Type":"ContainerDied","Data":"c39cba9455b3ee0cdc317f027086cb8fe1123bcfb8b7290d40d1b59cbbb93a79"} Jan 26 00:32:04 crc kubenswrapper[5107]: I0126 00:32:04.329285 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-9kxhk" Jan 26 00:32:04 crc kubenswrapper[5107]: I0126 00:32:04.505273 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l6sv\" (UniqueName: \"kubernetes.io/projected/261140c0-a32c-4656-914c-7b6c9f0c8968-kube-api-access-7l6sv\") pod \"261140c0-a32c-4656-914c-7b6c9f0c8968\" (UID: \"261140c0-a32c-4656-914c-7b6c9f0c8968\") " Jan 26 00:32:04 crc kubenswrapper[5107]: I0126 00:32:04.512914 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261140c0-a32c-4656-914c-7b6c9f0c8968-kube-api-access-7l6sv" (OuterVolumeSpecName: "kube-api-access-7l6sv") pod "261140c0-a32c-4656-914c-7b6c9f0c8968" (UID: "261140c0-a32c-4656-914c-7b6c9f0c8968"). InnerVolumeSpecName "kube-api-access-7l6sv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:32:04 crc kubenswrapper[5107]: I0126 00:32:04.606982 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7l6sv\" (UniqueName: \"kubernetes.io/projected/261140c0-a32c-4656-914c-7b6c9f0c8968-kube-api-access-7l6sv\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:05 crc kubenswrapper[5107]: I0126 00:32:05.028939 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-9kxhk" event={"ID":"261140c0-a32c-4656-914c-7b6c9f0c8968","Type":"ContainerDied","Data":"b50e3aa08a87278c88cbd490e88ea3da3ccc3d09ebda1e246690145a0a2b1315"} Jan 26 00:32:05 crc kubenswrapper[5107]: I0126 00:32:05.028996 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-9kxhk" Jan 26 00:32:05 crc kubenswrapper[5107]: I0126 00:32:05.029012 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b50e3aa08a87278c88cbd490e88ea3da3ccc3d09ebda1e246690145a0a2b1315" Jan 26 00:32:05 crc kubenswrapper[5107]: I0126 00:32:05.399401 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-8l48g"] Jan 26 00:32:05 crc kubenswrapper[5107]: I0126 00:32:05.407030 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-8l48g"] Jan 26 00:32:06 crc kubenswrapper[5107]: I0126 00:32:06.121923 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f5d729a-317b-4713-a3c3-c5cc309edc5e" path="/var/lib/kubelet/pods/9f5d729a-317b-4713-a3c3-c5cc309edc5e/volumes" Jan 26 00:32:14 crc kubenswrapper[5107]: I0126 00:32:14.100366 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_6194cf20-381a-4030-a802-413bdf580aca/docker-build/0.log" Jan 26 00:32:14 crc kubenswrapper[5107]: I0126 00:32:14.102597 5107 generic.go:358] "Generic (PLEG): container finished" podID="6194cf20-381a-4030-a802-413bdf580aca" containerID="975ab1fa8970e5319e6e83feffe3a8c9cf327b4784fe39119c381d5655378fdf" exitCode=1 Jan 26 00:32:14 crc kubenswrapper[5107]: I0126 00:32:14.102708 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"6194cf20-381a-4030-a802-413bdf580aca","Type":"ContainerDied","Data":"975ab1fa8970e5319e6e83feffe3a8c9cf327b4784fe39119c381d5655378fdf"} Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.353775 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_6194cf20-381a-4030-a802-413bdf580aca/docker-build/0.log" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.355439 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389069 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-buildworkdir\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389165 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-ca-bundles\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389246 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jcr8\" (UniqueName: \"kubernetes.io/projected/6194cf20-381a-4030-a802-413bdf580aca-kube-api-access-9jcr8\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389290 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-buildcachedir\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389346 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-build-blob-cache\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389397 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-pull\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389421 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-push\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389443 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-node-pullsecrets\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389491 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-system-configs\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389538 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-root\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389625 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-proxy-ca-bundles\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.389646 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-run\") pod \"6194cf20-381a-4030-a802-413bdf580aca\" (UID: \"6194cf20-381a-4030-a802-413bdf580aca\") " Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.390615 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.391010 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.391259 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.391301 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.391704 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.391751 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). 
InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.401084 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-pull" (OuterVolumeSpecName: "builder-dockercfg-x24l2-pull") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). InnerVolumeSpecName "builder-dockercfg-x24l2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.401180 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6194cf20-381a-4030-a802-413bdf580aca-kube-api-access-9jcr8" (OuterVolumeSpecName: "kube-api-access-9jcr8") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). InnerVolumeSpecName "kube-api-access-9jcr8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.402068 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-push" (OuterVolumeSpecName: "builder-dockercfg-x24l2-push") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). InnerVolumeSpecName "builder-dockercfg-x24l2-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.431264 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.491831 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.491885 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.491911 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.491925 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.491935 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6194cf20-381a-4030-a802-413bdf580aca-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.491949 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9jcr8\" (UniqueName: \"kubernetes.io/projected/6194cf20-381a-4030-a802-413bdf580aca-kube-api-access-9jcr8\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.491959 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.491971 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.491982 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/6194cf20-381a-4030-a802-413bdf580aca-builder-dockercfg-x24l2-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.491991 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6194cf20-381a-4030-a802-413bdf580aca-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.636422 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:32:15 crc kubenswrapper[5107]: I0126 00:32:15.695709 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:16 crc kubenswrapper[5107]: I0126 00:32:16.124267 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_6194cf20-381a-4030-a802-413bdf580aca/docker-build/0.log" Jan 26 00:32:16 crc kubenswrapper[5107]: I0126 00:32:16.126084 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:32:16 crc kubenswrapper[5107]: I0126 00:32:16.131047 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"6194cf20-381a-4030-a802-413bdf580aca","Type":"ContainerDied","Data":"c2f500fe408e3b6cb4e1434a13650a99332e498ba9db524aa3880723c727cb84"} Jan 26 00:32:16 crc kubenswrapper[5107]: I0126 00:32:16.131107 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f500fe408e3b6cb4e1434a13650a99332e498ba9db524aa3880723c727cb84" Jan 26 00:32:17 crc kubenswrapper[5107]: I0126 00:32:17.312664 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6194cf20-381a-4030-a802-413bdf580aca" (UID: "6194cf20-381a-4030-a802-413bdf580aca"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:32:17 crc kubenswrapper[5107]: I0126 00:32:17.323845 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6194cf20-381a-4030-a802-413bdf580aca-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.484116 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.485406 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6194cf20-381a-4030-a802-413bdf580aca" containerName="git-clone" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.485426 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="6194cf20-381a-4030-a802-413bdf580aca" containerName="git-clone" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.485466 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6194cf20-381a-4030-a802-413bdf580aca" containerName="docker-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.485472 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="6194cf20-381a-4030-a802-413bdf580aca" containerName="docker-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.485492 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="261140c0-a32c-4656-914c-7b6c9f0c8968" containerName="oc" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.485498 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="261140c0-a32c-4656-914c-7b6c9f0c8968" containerName="oc" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.485506 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="6194cf20-381a-4030-a802-413bdf580aca" containerName="manage-dockerfile" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.485511 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="6194cf20-381a-4030-a802-413bdf580aca" containerName="manage-dockerfile" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.485615 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="261140c0-a32c-4656-914c-7b6c9f0c8968" containerName="oc" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.485627 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="6194cf20-381a-4030-a802-413bdf580aca" containerName="docker-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.538186 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.538515 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.542777 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.542902 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.542911 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.543267 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-x24l2\"" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.677444 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.677645 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.677735 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.677780 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: 
\"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.677845 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.677901 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.678033 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.678207 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.678259 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.678422 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.678539 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwtc7\" (UniqueName: \"kubernetes.io/projected/dd22b8af-ee94-4947-914f-6405029c6104-kube-api-access-gwtc7\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.678576 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.779841 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780002 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780032 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780059 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780095 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwtc7\" (UniqueName: \"kubernetes.io/projected/dd22b8af-ee94-4947-914f-6405029c6104-kube-api-access-gwtc7\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780253 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780328 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780357 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780386 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780418 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780439 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780496 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.780928 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.781279 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.781211 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.781428 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.781587 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.781722 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.781760 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.781825 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.782475 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.789074 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.790094 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-push\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.799942 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwtc7\" (UniqueName: \"kubernetes.io/projected/dd22b8af-ee94-4947-914f-6405029c6104-kube-api-access-gwtc7\") pod \"service-telemetry-operator-5-build\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:26 crc kubenswrapper[5107]: I0126 00:32:26.862607 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:27 crc kubenswrapper[5107]: I0126 00:32:27.091013 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:32:27 crc kubenswrapper[5107]: I0126 00:32:27.223123 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"dd22b8af-ee94-4947-914f-6405029c6104","Type":"ContainerStarted","Data":"d042fb8d711f3063e37779250d6b2409ef2fd4cbbfcba3a0694389729d2dde43"} Jan 26 00:32:28 crc kubenswrapper[5107]: I0126 00:32:28.233542 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"dd22b8af-ee94-4947-914f-6405029c6104","Type":"ContainerStarted","Data":"5df54873dcf491e7286f0cdcb67c63cc3e4361846b7f14bf7a100954be66ba68"} Jan 26 00:32:37 crc kubenswrapper[5107]: I0126 00:32:37.318239 5107 generic.go:358] "Generic (PLEG): container finished" podID="dd22b8af-ee94-4947-914f-6405029c6104" containerID="5df54873dcf491e7286f0cdcb67c63cc3e4361846b7f14bf7a100954be66ba68" exitCode=0 Jan 26 00:32:37 crc kubenswrapper[5107]: I0126 00:32:37.318361 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"dd22b8af-ee94-4947-914f-6405029c6104","Type":"ContainerDied","Data":"5df54873dcf491e7286f0cdcb67c63cc3e4361846b7f14bf7a100954be66ba68"} Jan 26 00:32:38 crc kubenswrapper[5107]: I0126 00:32:38.330463 5107 generic.go:358] "Generic (PLEG): container finished" podID="dd22b8af-ee94-4947-914f-6405029c6104" containerID="62c791d28d39a04db957abcc13878fc5088aacb94ae2f6a3e76c11847fe93f0b" exitCode=0 Jan 26 00:32:38 crc kubenswrapper[5107]: I0126 00:32:38.330565 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"dd22b8af-ee94-4947-914f-6405029c6104","Type":"ContainerDied","Data":"62c791d28d39a04db957abcc13878fc5088aacb94ae2f6a3e76c11847fe93f0b"} Jan 26 00:32:38 crc kubenswrapper[5107]: I0126 00:32:38.364433 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_dd22b8af-ee94-4947-914f-6405029c6104/manage-dockerfile/0.log" Jan 26 00:32:39 crc kubenswrapper[5107]: I0126 00:32:39.344577 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"dd22b8af-ee94-4947-914f-6405029c6104","Type":"ContainerStarted","Data":"b6516e5bc101f6d3fee0e52c4f2107e210ebfa5baf9ae8aef967d590787616ab"} Jan 26 00:32:39 crc kubenswrapper[5107]: I0126 00:32:39.380311 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-5-build" podStartSLOduration=13.380282056 podStartE2EDuration="13.380282056s" podCreationTimestamp="2026-01-26 00:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:32:39.375764778 +0000 UTC m=+1404.293359144" watchObservedRunningTime="2026-01-26 00:32:39.380282056 +0000 UTC m=+1404.297876402" Jan 26 00:33:11 crc kubenswrapper[5107]: I0126 00:33:11.944020 5107 scope.go:117] "RemoveContainer" containerID="a8cf6d7bd24a367b398879ebee3516aaf3b4804a916e585d85d99f70fe28e350" Jan 26 00:33:50 crc kubenswrapper[5107]: I0126 00:33:50.951786 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_dd22b8af-ee94-4947-914f-6405029c6104/docker-build/0.log" Jan 26 00:33:50 crc kubenswrapper[5107]: I0126 00:33:50.954023 5107 generic.go:358] "Generic (PLEG): container finished" podID="dd22b8af-ee94-4947-914f-6405029c6104" containerID="b6516e5bc101f6d3fee0e52c4f2107e210ebfa5baf9ae8aef967d590787616ab" exitCode=1 Jan 26 00:33:50 crc kubenswrapper[5107]: I0126 00:33:50.954221 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"dd22b8af-ee94-4947-914f-6405029c6104","Type":"ContainerDied","Data":"b6516e5bc101f6d3fee0e52c4f2107e210ebfa5baf9ae8aef967d590787616ab"} Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.230798 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_dd22b8af-ee94-4947-914f-6405029c6104/docker-build/0.log" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.231989 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.396059 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-system-configs\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.396202 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-ca-bundles\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.396251 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-push\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.396284 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-pull\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.396342 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-root\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.396459 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-run\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.397526 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-buildcachedir\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.397582 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.397658 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.397742 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-node-pullsecrets\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.397778 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-proxy-ca-bundles\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.397767 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.397820 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-build-blob-cache\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.397845 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.397952 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwtc7\" (UniqueName: \"kubernetes.io/projected/dd22b8af-ee94-4947-914f-6405029c6104-kube-api-access-gwtc7\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.397995 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-buildworkdir\") pod \"dd22b8af-ee94-4947-914f-6405029c6104\" (UID: \"dd22b8af-ee94-4947-914f-6405029c6104\") " Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.398416 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.398444 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.398456 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.398469 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dd22b8af-ee94-4947-914f-6405029c6104-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.398700 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.399302 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.411189 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-push" (OuterVolumeSpecName: "builder-dockercfg-x24l2-push") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "builder-dockercfg-x24l2-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.412328 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd22b8af-ee94-4947-914f-6405029c6104-kube-api-access-gwtc7" (OuterVolumeSpecName: "kube-api-access-gwtc7") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "kube-api-access-gwtc7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.414116 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-pull" (OuterVolumeSpecName: "builder-dockercfg-x24l2-pull") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "builder-dockercfg-x24l2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.438733 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.500243 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-x24l2-push\" (UniqueName: \"kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.500285 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-x24l2-pull\" (UniqueName: \"kubernetes.io/secret/dd22b8af-ee94-4947-914f-6405029c6104-builder-dockercfg-x24l2-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.500298 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.500312 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd22b8af-ee94-4947-914f-6405029c6104-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.500325 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gwtc7\" (UniqueName: \"kubernetes.io/projected/dd22b8af-ee94-4947-914f-6405029c6104-kube-api-access-gwtc7\") on node \"crc\" DevicePath \"\"" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.500337 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.643896 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.704430 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.974524 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_dd22b8af-ee94-4947-914f-6405029c6104/docker-build/0.log" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.976062 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"dd22b8af-ee94-4947-914f-6405029c6104","Type":"ContainerDied","Data":"d042fb8d711f3063e37779250d6b2409ef2fd4cbbfcba3a0694389729d2dde43"} Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.976104 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:33:52 crc kubenswrapper[5107]: I0126 00:33:52.976118 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d042fb8d711f3063e37779250d6b2409ef2fd4cbbfcba3a0694389729d2dde43" Jan 26 00:33:54 crc kubenswrapper[5107]: I0126 00:33:54.181115 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "dd22b8af-ee94-4947-914f-6405029c6104" (UID: "dd22b8af-ee94-4947-914f-6405029c6104"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:33:54 crc kubenswrapper[5107]: I0126 00:33:54.242969 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dd22b8af-ee94-4947-914f-6405029c6104-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.182415 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489794-64s5r"] Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.183821 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd22b8af-ee94-4947-914f-6405029c6104" containerName="manage-dockerfile" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.183848 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd22b8af-ee94-4947-914f-6405029c6104" containerName="manage-dockerfile" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.183872 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd22b8af-ee94-4947-914f-6405029c6104" containerName="docker-build" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.183878 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd22b8af-ee94-4947-914f-6405029c6104" containerName="docker-build" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.183976 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd22b8af-ee94-4947-914f-6405029c6104" containerName="git-clone" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.183982 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd22b8af-ee94-4947-914f-6405029c6104" containerName="git-clone" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.184112 5107 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="dd22b8af-ee94-4947-914f-6405029c6104" containerName="docker-build" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.301294 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489794-64s5r"] Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.301501 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-64s5r" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.304052 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-96gbq\"" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.304468 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.305057 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.440243 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffk8p\" (UniqueName: \"kubernetes.io/projected/e6806ef1-b69b-4794-8fb2-d10d84202004-kube-api-access-ffk8p\") pod \"auto-csr-approver-29489794-64s5r\" (UID: \"e6806ef1-b69b-4794-8fb2-d10d84202004\") " pod="openshift-infra/auto-csr-approver-29489794-64s5r" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.542269 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ffk8p\" (UniqueName: \"kubernetes.io/projected/e6806ef1-b69b-4794-8fb2-d10d84202004-kube-api-access-ffk8p\") pod \"auto-csr-approver-29489794-64s5r\" (UID: \"e6806ef1-b69b-4794-8fb2-d10d84202004\") " pod="openshift-infra/auto-csr-approver-29489794-64s5r" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.563579 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffk8p\" (UniqueName: \"kubernetes.io/projected/e6806ef1-b69b-4794-8fb2-d10d84202004-kube-api-access-ffk8p\") pod \"auto-csr-approver-29489794-64s5r\" (UID: \"e6806ef1-b69b-4794-8fb2-d10d84202004\") " pod="openshift-infra/auto-csr-approver-29489794-64s5r" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.629812 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-64s5r" Jan 26 00:34:00 crc kubenswrapper[5107]: I0126 00:34:00.848756 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489794-64s5r"] Jan 26 00:34:01 crc kubenswrapper[5107]: I0126 00:34:01.067288 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489794-64s5r" event={"ID":"e6806ef1-b69b-4794-8fb2-d10d84202004","Type":"ContainerStarted","Data":"9b4f86d36743062534cb02ef5e63b7b5a4b3415992892727f4e3f8f67e85e32d"} Jan 26 00:34:02 crc kubenswrapper[5107]: I0126 00:34:02.077688 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489794-64s5r" event={"ID":"e6806ef1-b69b-4794-8fb2-d10d84202004","Type":"ContainerStarted","Data":"ea3accc3ba8fee255b8aa2aa042d54156e04374e4730b1b223c0959b34050abd"} Jan 26 00:34:02 crc kubenswrapper[5107]: I0126 00:34:02.095431 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489794-64s5r" podStartSLOduration=1.193950433 podStartE2EDuration="2.09540794s" podCreationTimestamp="2026-01-26 00:34:00 +0000 UTC" firstStartedPulling="2026-01-26 00:34:00.860583771 +0000 UTC m=+1485.778178117" lastFinishedPulling="2026-01-26 00:34:01.762041278 +0000 UTC m=+1486.679635624" observedRunningTime="2026-01-26 00:34:02.092788396 +0000 UTC m=+1487.010382742" watchObservedRunningTime="2026-01-26 00:34:02.09540794 +0000 UTC m=+1487.013002286" Jan 26 00:34:03 crc kubenswrapper[5107]: I0126 00:34:03.090394 5107 generic.go:358] "Generic (PLEG): container finished" podID="e6806ef1-b69b-4794-8fb2-d10d84202004" containerID="ea3accc3ba8fee255b8aa2aa042d54156e04374e4730b1b223c0959b34050abd" exitCode=0 Jan 26 00:34:03 crc kubenswrapper[5107]: I0126 00:34:03.090538 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489794-64s5r" event={"ID":"e6806ef1-b69b-4794-8fb2-d10d84202004","Type":"ContainerDied","Data":"ea3accc3ba8fee255b8aa2aa042d54156e04374e4730b1b223c0959b34050abd"} Jan 26 00:34:04 crc kubenswrapper[5107]: I0126 00:34:04.340437 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-64s5r" Jan 26 00:34:04 crc kubenswrapper[5107]: I0126 00:34:04.508157 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffk8p\" (UniqueName: \"kubernetes.io/projected/e6806ef1-b69b-4794-8fb2-d10d84202004-kube-api-access-ffk8p\") pod \"e6806ef1-b69b-4794-8fb2-d10d84202004\" (UID: \"e6806ef1-b69b-4794-8fb2-d10d84202004\") " Jan 26 00:34:04 crc kubenswrapper[5107]: I0126 00:34:04.515997 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6806ef1-b69b-4794-8fb2-d10d84202004-kube-api-access-ffk8p" (OuterVolumeSpecName: "kube-api-access-ffk8p") pod "e6806ef1-b69b-4794-8fb2-d10d84202004" (UID: "e6806ef1-b69b-4794-8fb2-d10d84202004"). InnerVolumeSpecName "kube-api-access-ffk8p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:34:04 crc kubenswrapper[5107]: I0126 00:34:04.610542 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ffk8p\" (UniqueName: \"kubernetes.io/projected/e6806ef1-b69b-4794-8fb2-d10d84202004-kube-api-access-ffk8p\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:04 crc kubenswrapper[5107]: I0126 00:34:04.895731 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b9jbm"] Jan 26 00:34:04 crc kubenswrapper[5107]: I0126 00:34:04.896706 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e6806ef1-b69b-4794-8fb2-d10d84202004" containerName="oc" Jan 26 00:34:04 crc kubenswrapper[5107]: I0126 00:34:04.896726 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6806ef1-b69b-4794-8fb2-d10d84202004" containerName="oc" Jan 26 00:34:04 crc kubenswrapper[5107]: I0126 00:34:04.896931 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="e6806ef1-b69b-4794-8fb2-d10d84202004" containerName="oc" Jan 26 00:34:04 crc kubenswrapper[5107]: I0126 00:34:04.917865 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b9jbm"] Jan 26 00:34:04 crc kubenswrapper[5107]: I0126 00:34:04.918125 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.016714 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvkmj\" (UniqueName: \"kubernetes.io/projected/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-kube-api-access-lvkmj\") pod \"certified-operators-b9jbm\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.016805 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-catalog-content\") pod \"certified-operators-b9jbm\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.016839 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-utilities\") pod \"certified-operators-b9jbm\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.088522 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r6gf6"] Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.093691 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.113244 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489794-64s5r" event={"ID":"e6806ef1-b69b-4794-8fb2-d10d84202004","Type":"ContainerDied","Data":"9b4f86d36743062534cb02ef5e63b7b5a4b3415992892727f4e3f8f67e85e32d"} Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.113282 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b4f86d36743062534cb02ef5e63b7b5a4b3415992892727f4e3f8f67e85e32d" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.113297 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r6gf6"] Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.113352 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-64s5r" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.120128 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-catalog-content\") pod \"certified-operators-b9jbm\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.120208 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-utilities\") pod \"certified-operators-b9jbm\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.120326 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lvkmj\" (UniqueName: \"kubernetes.io/projected/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-kube-api-access-lvkmj\") pod \"certified-operators-b9jbm\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.121547 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-catalog-content\") pod \"certified-operators-b9jbm\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.121855 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-utilities\") pod \"certified-operators-b9jbm\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.267339 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-catalog-content\") pod \"redhat-operators-r6gf6\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.267436 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfzg8\" (UniqueName: 
\"kubernetes.io/projected/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-kube-api-access-qfzg8\") pod \"redhat-operators-r6gf6\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.267471 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-utilities\") pod \"redhat-operators-r6gf6\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.283261 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvkmj\" (UniqueName: \"kubernetes.io/projected/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-kube-api-access-lvkmj\") pod \"certified-operators-b9jbm\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.314732 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-w92s7"] Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.320014 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-w92s7"] Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.369344 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-catalog-content\") pod \"redhat-operators-r6gf6\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.369440 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qfzg8\" (UniqueName: \"kubernetes.io/projected/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-kube-api-access-qfzg8\") pod \"redhat-operators-r6gf6\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.369470 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-utilities\") pod \"redhat-operators-r6gf6\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.370206 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-utilities\") pod \"redhat-operators-r6gf6\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.370499 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-catalog-content\") pod \"redhat-operators-r6gf6\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.398029 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfzg8\" (UniqueName: 
\"kubernetes.io/projected/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-kube-api-access-qfzg8\") pod \"redhat-operators-r6gf6\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.412055 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.557357 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:05 crc kubenswrapper[5107]: I0126 00:34:05.703428 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r6gf6"] Jan 26 00:34:06 crc kubenswrapper[5107]: I0126 00:34:06.085532 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b9jbm"] Jan 26 00:34:06 crc kubenswrapper[5107]: W0126 00:34:06.094285 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ddb99a4_59d7_46da_985f_45aea4e4c7a2.slice/crio-0e8f34765f40c19272d969633ec83a984e0abf6778cda3b7b999b334e21d49e7 WatchSource:0}: Error finding container 0e8f34765f40c19272d969633ec83a984e0abf6778cda3b7b999b334e21d49e7: Status 404 returned error can't find the container with id 0e8f34765f40c19272d969633ec83a984e0abf6778cda3b7b999b334e21d49e7 Jan 26 00:34:06 crc kubenswrapper[5107]: I0126 00:34:06.127532 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c372b6-a6e9-46ec-b505-b7052e081793" path="/var/lib/kubelet/pods/85c372b6-a6e9-46ec-b505-b7052e081793/volumes" Jan 26 00:34:06 crc kubenswrapper[5107]: I0126 00:34:06.132172 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9jbm" event={"ID":"9ddb99a4-59d7-46da-985f-45aea4e4c7a2","Type":"ContainerStarted","Data":"0e8f34765f40c19272d969633ec83a984e0abf6778cda3b7b999b334e21d49e7"} Jan 26 00:34:06 crc kubenswrapper[5107]: I0126 00:34:06.145940 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6gf6" event={"ID":"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc","Type":"ContainerDied","Data":"009d2f6fc60746bf7d3cf3ceb7bb4b9e8dd2d50fa7dcc0919c2652b4bf3ef811"} Jan 26 00:34:06 crc kubenswrapper[5107]: I0126 00:34:06.145983 5107 generic.go:358] "Generic (PLEG): container finished" podID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" containerID="009d2f6fc60746bf7d3cf3ceb7bb4b9e8dd2d50fa7dcc0919c2652b4bf3ef811" exitCode=0 Jan 26 00:34:06 crc kubenswrapper[5107]: I0126 00:34:06.146706 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6gf6" event={"ID":"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc","Type":"ContainerStarted","Data":"fdf2d6b14549724748c26e84d56a24358b5722679244a4e50bdabdf768a10a3b"} Jan 26 00:34:07 crc kubenswrapper[5107]: I0126 00:34:07.158979 5107 generic.go:358] "Generic (PLEG): container finished" podID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" containerID="3197e907c33c49dd7116e8c4944dc0c1b4143f46de94c93b01814a6a2192d689" exitCode=0 Jan 26 00:34:07 crc kubenswrapper[5107]: I0126 00:34:07.159684 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9jbm" event={"ID":"9ddb99a4-59d7-46da-985f-45aea4e4c7a2","Type":"ContainerDied","Data":"3197e907c33c49dd7116e8c4944dc0c1b4143f46de94c93b01814a6a2192d689"} Jan 26 00:34:07 crc 
kubenswrapper[5107]: I0126 00:34:07.165106 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6gf6" event={"ID":"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc","Type":"ContainerStarted","Data":"2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192"} Jan 26 00:34:07 crc kubenswrapper[5107]: E0126 00:34:07.623068 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d1cbe9c_e1a9_4434_ab43_d90bbe701abc.slice/crio-2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d1cbe9c_e1a9_4434_ab43_d90bbe701abc.slice/crio-conmon-2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192.scope\": RecentStats: unable to find data in memory cache]" Jan 26 00:34:08 crc kubenswrapper[5107]: I0126 00:34:08.178309 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9jbm" event={"ID":"9ddb99a4-59d7-46da-985f-45aea4e4c7a2","Type":"ContainerStarted","Data":"c3d5ea3f4d7ee89105f7dfb5db6098541b5a932107818a9fa4d5e8f9215ee9bf"} Jan 26 00:34:08 crc kubenswrapper[5107]: I0126 00:34:08.181115 5107 generic.go:358] "Generic (PLEG): container finished" podID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" containerID="2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192" exitCode=0 Jan 26 00:34:08 crc kubenswrapper[5107]: I0126 00:34:08.181203 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6gf6" event={"ID":"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc","Type":"ContainerDied","Data":"2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192"} Jan 26 00:34:09 crc kubenswrapper[5107]: I0126 00:34:09.194333 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6gf6" event={"ID":"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc","Type":"ContainerStarted","Data":"f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e"} Jan 26 00:34:09 crc kubenswrapper[5107]: I0126 00:34:09.199739 5107 generic.go:358] "Generic (PLEG): container finished" podID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" containerID="c3d5ea3f4d7ee89105f7dfb5db6098541b5a932107818a9fa4d5e8f9215ee9bf" exitCode=0 Jan 26 00:34:09 crc kubenswrapper[5107]: I0126 00:34:09.199817 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9jbm" event={"ID":"9ddb99a4-59d7-46da-985f-45aea4e4c7a2","Type":"ContainerDied","Data":"c3d5ea3f4d7ee89105f7dfb5db6098541b5a932107818a9fa4d5e8f9215ee9bf"} Jan 26 00:34:09 crc kubenswrapper[5107]: I0126 00:34:09.219185 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r6gf6" podStartSLOduration=3.5884414700000002 podStartE2EDuration="4.219008402s" podCreationTimestamp="2026-01-26 00:34:05 +0000 UTC" firstStartedPulling="2026-01-26 00:34:06.150414975 +0000 UTC m=+1491.068009321" lastFinishedPulling="2026-01-26 00:34:06.780981907 +0000 UTC m=+1491.698576253" observedRunningTime="2026-01-26 00:34:09.214549686 +0000 UTC m=+1494.132144022" watchObservedRunningTime="2026-01-26 00:34:09.219008402 +0000 UTC m=+1494.136602748" Jan 26 00:34:10 crc kubenswrapper[5107]: I0126 00:34:10.211615 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-b9jbm" event={"ID":"9ddb99a4-59d7-46da-985f-45aea4e4c7a2","Type":"ContainerStarted","Data":"d710ef11bf0f12b83b2d8916917852ce6193421964823a982b1e2fe5d93448b5"} Jan 26 00:34:10 crc kubenswrapper[5107]: I0126 00:34:10.246627 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b9jbm" podStartSLOduration=5.5050853669999995 podStartE2EDuration="6.246590054s" podCreationTimestamp="2026-01-26 00:34:04 +0000 UTC" firstStartedPulling="2026-01-26 00:34:07.160962546 +0000 UTC m=+1492.078556892" lastFinishedPulling="2026-01-26 00:34:07.902467233 +0000 UTC m=+1492.820061579" observedRunningTime="2026-01-26 00:34:10.240427999 +0000 UTC m=+1495.158022345" watchObservedRunningTime="2026-01-26 00:34:10.246590054 +0000 UTC m=+1495.164184390" Jan 26 00:34:12 crc kubenswrapper[5107]: I0126 00:34:12.079865 5107 scope.go:117] "RemoveContainer" containerID="fb7fcdf7e8d0844060b5a3803d6d56d1d2efb8aebd463e7b78df930ae05bac0c" Jan 26 00:34:15 crc kubenswrapper[5107]: I0126 00:34:15.413177 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:15 crc kubenswrapper[5107]: I0126 00:34:15.413638 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:15 crc kubenswrapper[5107]: I0126 00:34:15.470779 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:15 crc kubenswrapper[5107]: I0126 00:34:15.559110 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:15 crc kubenswrapper[5107]: I0126 00:34:15.559220 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:15 crc kubenswrapper[5107]: I0126 00:34:15.604416 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:16 crc kubenswrapper[5107]: I0126 00:34:16.367954 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:16 crc kubenswrapper[5107]: I0126 00:34:16.368695 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:16 crc kubenswrapper[5107]: I0126 00:34:16.906245 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b9jbm"] Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.227393 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_dd22b8af-ee94-4947-914f-6405029c6104/docker-build/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.229951 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_6194cf20-381a-4030-a802-413bdf580aca/docker-build/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.231998 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_dd22b8af-ee94-4947-914f-6405029c6104/docker-build/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.232345 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_56677a28-74b1-42c7-a42b-1aaf1ebcdc8a/docker-build/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.234403 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_6194cf20-381a-4030-a802-413bdf580aca/docker-build/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.235984 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e49e6c4b-b61e-40bf-8b52-2abf782b22df/docker-build/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.237030 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_56677a28-74b1-42c7-a42b-1aaf1ebcdc8a/docker-build/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.239813 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e49e6c4b-b61e-40bf-8b52-2abf782b22df/docker-build/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.305683 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f2mpq_2e5342d5-2d0c-458d-94b7-25c802ce298a/kube-multus/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.307864 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f2mpq_2e5342d5-2d0c-458d-94b7-25c802ce298a/kube-multus/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.310817 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.313297 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.318425 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:34:17 crc kubenswrapper[5107]: I0126 00:34:17.319985 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:34:18 crc kubenswrapper[5107]: I0126 00:34:18.339738 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b9jbm" podUID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" containerName="registry-server" containerID="cri-o://d710ef11bf0f12b83b2d8916917852ce6193421964823a982b1e2fe5d93448b5" gracePeriod=2 Jan 26 00:34:18 crc kubenswrapper[5107]: I0126 00:34:18.708300 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r6gf6"] Jan 26 00:34:18 crc kubenswrapper[5107]: I0126 00:34:18.708835 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r6gf6" podUID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" containerName="registry-server" containerID="cri-o://f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e" gracePeriod=2 Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.181631 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.282581 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-catalog-content\") pod \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.282712 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-utilities\") pod \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.282922 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfzg8\" (UniqueName: \"kubernetes.io/projected/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-kube-api-access-qfzg8\") pod \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\" (UID: \"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc\") " Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.284697 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-utilities" (OuterVolumeSpecName: "utilities") pod "7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" (UID: "7d1cbe9c-e1a9-4434-ab43-d90bbe701abc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.291489 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-kube-api-access-qfzg8" (OuterVolumeSpecName: "kube-api-access-qfzg8") pod "7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" (UID: "7d1cbe9c-e1a9-4434-ab43-d90bbe701abc"). InnerVolumeSpecName "kube-api-access-qfzg8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.357529 5107 generic.go:358] "Generic (PLEG): container finished" podID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" containerID="f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e" exitCode=0 Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.357644 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6gf6" event={"ID":"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc","Type":"ContainerDied","Data":"f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e"} Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.357666 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r6gf6" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.357706 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6gf6" event={"ID":"7d1cbe9c-e1a9-4434-ab43-d90bbe701abc","Type":"ContainerDied","Data":"fdf2d6b14549724748c26e84d56a24358b5722679244a4e50bdabdf768a10a3b"} Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.357737 5107 scope.go:117] "RemoveContainer" containerID="f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.368931 5107 generic.go:358] "Generic (PLEG): container finished" podID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" containerID="d710ef11bf0f12b83b2d8916917852ce6193421964823a982b1e2fe5d93448b5" exitCode=0 Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.369013 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9jbm" event={"ID":"9ddb99a4-59d7-46da-985f-45aea4e4c7a2","Type":"ContainerDied","Data":"d710ef11bf0f12b83b2d8916917852ce6193421964823a982b1e2fe5d93448b5"} Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.385217 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qfzg8\" (UniqueName: \"kubernetes.io/projected/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-kube-api-access-qfzg8\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.385271 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.388467 5107 scope.go:117] "RemoveContainer" containerID="2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.417909 5107 scope.go:117] "RemoveContainer" containerID="009d2f6fc60746bf7d3cf3ceb7bb4b9e8dd2d50fa7dcc0919c2652b4bf3ef811" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.430260 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" (UID: "7d1cbe9c-e1a9-4434-ab43-d90bbe701abc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.448266 5107 scope.go:117] "RemoveContainer" containerID="f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e" Jan 26 00:34:20 crc kubenswrapper[5107]: E0126 00:34:20.448959 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e\": container with ID starting with f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e not found: ID does not exist" containerID="f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.448995 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e"} err="failed to get container status \"f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e\": rpc error: code = NotFound desc = could not find container \"f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e\": container with ID starting with f5517d96ad29fe85711a180aba5b6f021eb10f1a490d37023adb94e2f7df145e not found: ID does not exist" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.449018 5107 scope.go:117] "RemoveContainer" containerID="2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192" Jan 26 00:34:20 crc kubenswrapper[5107]: E0126 00:34:20.449307 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192\": container with ID starting with 2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192 not found: ID does not exist" containerID="2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.449342 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192"} err="failed to get container status \"2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192\": rpc error: code = NotFound desc = could not find container \"2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192\": container with ID starting with 2ddd548ea3e92def2226a1210b5f2bfd67ad3b66dcd1952ed6944827ca219192 not found: ID does not exist" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.449357 5107 scope.go:117] "RemoveContainer" containerID="009d2f6fc60746bf7d3cf3ceb7bb4b9e8dd2d50fa7dcc0919c2652b4bf3ef811" Jan 26 00:34:20 crc kubenswrapper[5107]: E0126 00:34:20.449664 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009d2f6fc60746bf7d3cf3ceb7bb4b9e8dd2d50fa7dcc0919c2652b4bf3ef811\": container with ID starting with 009d2f6fc60746bf7d3cf3ceb7bb4b9e8dd2d50fa7dcc0919c2652b4bf3ef811 not found: ID does not exist" containerID="009d2f6fc60746bf7d3cf3ceb7bb4b9e8dd2d50fa7dcc0919c2652b4bf3ef811" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.449701 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009d2f6fc60746bf7d3cf3ceb7bb4b9e8dd2d50fa7dcc0919c2652b4bf3ef811"} err="failed to get container status \"009d2f6fc60746bf7d3cf3ceb7bb4b9e8dd2d50fa7dcc0919c2652b4bf3ef811\": rpc error: code = NotFound desc = could not 
find container \"009d2f6fc60746bf7d3cf3ceb7bb4b9e8dd2d50fa7dcc0919c2652b4bf3ef811\": container with ID starting with 009d2f6fc60746bf7d3cf3ceb7bb4b9e8dd2d50fa7dcc0919c2652b4bf3ef811 not found: ID does not exist" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.486870 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.498212 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.588288 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvkmj\" (UniqueName: \"kubernetes.io/projected/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-kube-api-access-lvkmj\") pod \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.588541 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-catalog-content\") pod \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.588616 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-utilities\") pod \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\" (UID: \"9ddb99a4-59d7-46da-985f-45aea4e4c7a2\") " Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.590175 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-utilities" (OuterVolumeSpecName: "utilities") pod "9ddb99a4-59d7-46da-985f-45aea4e4c7a2" (UID: "9ddb99a4-59d7-46da-985f-45aea4e4c7a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.595954 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-kube-api-access-lvkmj" (OuterVolumeSpecName: "kube-api-access-lvkmj") pod "9ddb99a4-59d7-46da-985f-45aea4e4c7a2" (UID: "9ddb99a4-59d7-46da-985f-45aea4e4c7a2"). InnerVolumeSpecName "kube-api-access-lvkmj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.640002 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ddb99a4-59d7-46da-985f-45aea4e4c7a2" (UID: "9ddb99a4-59d7-46da-985f-45aea4e4c7a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.690269 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.690328 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lvkmj\" (UniqueName: \"kubernetes.io/projected/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-kube-api-access-lvkmj\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.690339 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ddb99a4-59d7-46da-985f-45aea4e4c7a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.708689 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r6gf6"] Jan 26 00:34:20 crc kubenswrapper[5107]: I0126 00:34:20.717315 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r6gf6"] Jan 26 00:34:21 crc kubenswrapper[5107]: I0126 00:34:21.381085 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9jbm" event={"ID":"9ddb99a4-59d7-46da-985f-45aea4e4c7a2","Type":"ContainerDied","Data":"0e8f34765f40c19272d969633ec83a984e0abf6778cda3b7b999b334e21d49e7"} Jan 26 00:34:21 crc kubenswrapper[5107]: I0126 00:34:21.381132 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b9jbm" Jan 26 00:34:21 crc kubenswrapper[5107]: I0126 00:34:21.381179 5107 scope.go:117] "RemoveContainer" containerID="d710ef11bf0f12b83b2d8916917852ce6193421964823a982b1e2fe5d93448b5" Jan 26 00:34:21 crc kubenswrapper[5107]: I0126 00:34:21.400988 5107 scope.go:117] "RemoveContainer" containerID="c3d5ea3f4d7ee89105f7dfb5db6098541b5a932107818a9fa4d5e8f9215ee9bf" Jan 26 00:34:21 crc kubenswrapper[5107]: I0126 00:34:21.423041 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b9jbm"] Jan 26 00:34:21 crc kubenswrapper[5107]: I0126 00:34:21.425561 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b9jbm"] Jan 26 00:34:21 crc kubenswrapper[5107]: I0126 00:34:21.452786 5107 scope.go:117] "RemoveContainer" containerID="3197e907c33c49dd7116e8c4944dc0c1b4143f46de94c93b01814a6a2192d689" Jan 26 00:34:22 crc kubenswrapper[5107]: I0126 00:34:22.121410 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" path="/var/lib/kubelet/pods/7d1cbe9c-e1a9-4434-ab43-d90bbe701abc/volumes" Jan 26 00:34:22 crc kubenswrapper[5107]: I0126 00:34:22.122097 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" path="/var/lib/kubelet/pods/9ddb99a4-59d7-46da-985f-45aea4e4c7a2/volumes" Jan 26 00:34:30 crc kubenswrapper[5107]: I0126 00:34:30.723840 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:34:30 crc kubenswrapper[5107]: I0126 00:34:30.724404 5107 prober.go:120] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.335113 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wzwmc/must-gather-86cgn"] Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.336787 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" containerName="extract-content" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.336805 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" containerName="extract-content" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.336828 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" containerName="registry-server" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.336838 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" containerName="registry-server" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.336856 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" containerName="extract-utilities" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.336866 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" containerName="extract-utilities" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.336958 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" containerName="registry-server" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.336969 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" containerName="registry-server" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.336987 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" containerName="extract-content" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.336995 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" containerName="extract-content" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.337014 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" containerName="extract-utilities" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.337021 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" containerName="extract-utilities" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.337162 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d1cbe9c-e1a9-4434-ab43-d90bbe701abc" containerName="registry-server" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.337185 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="9ddb99a4-59d7-46da-985f-45aea4e4c7a2" containerName="registry-server" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.344125 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wzwmc/must-gather-86cgn" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.349940 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wzwmc/must-gather-86cgn"] Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.356937 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wzwmc\"/\"openshift-service-ca.crt\"" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.357340 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wzwmc\"/\"kube-root-ca.crt\"" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.364274 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-wzwmc\"/\"default-dockercfg-zqqv8\"" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.449456 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g54m\" (UniqueName: \"kubernetes.io/projected/ca406593-5197-475d-afd4-32f36cff5a44-kube-api-access-7g54m\") pod \"must-gather-86cgn\" (UID: \"ca406593-5197-475d-afd4-32f36cff5a44\") " pod="openshift-must-gather-wzwmc/must-gather-86cgn" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.449542 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca406593-5197-475d-afd4-32f36cff5a44-must-gather-output\") pod \"must-gather-86cgn\" (UID: \"ca406593-5197-475d-afd4-32f36cff5a44\") " pod="openshift-must-gather-wzwmc/must-gather-86cgn" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.551068 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca406593-5197-475d-afd4-32f36cff5a44-must-gather-output\") pod \"must-gather-86cgn\" (UID: \"ca406593-5197-475d-afd4-32f36cff5a44\") " pod="openshift-must-gather-wzwmc/must-gather-86cgn" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.551251 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7g54m\" (UniqueName: \"kubernetes.io/projected/ca406593-5197-475d-afd4-32f36cff5a44-kube-api-access-7g54m\") pod \"must-gather-86cgn\" (UID: \"ca406593-5197-475d-afd4-32f36cff5a44\") " pod="openshift-must-gather-wzwmc/must-gather-86cgn" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.551745 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca406593-5197-475d-afd4-32f36cff5a44-must-gather-output\") pod \"must-gather-86cgn\" (UID: \"ca406593-5197-475d-afd4-32f36cff5a44\") " pod="openshift-must-gather-wzwmc/must-gather-86cgn" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.584588 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g54m\" (UniqueName: \"kubernetes.io/projected/ca406593-5197-475d-afd4-32f36cff5a44-kube-api-access-7g54m\") pod \"must-gather-86cgn\" (UID: \"ca406593-5197-475d-afd4-32f36cff5a44\") " pod="openshift-must-gather-wzwmc/must-gather-86cgn" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.663801 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wzwmc/must-gather-86cgn" Jan 26 00:34:42 crc kubenswrapper[5107]: I0126 00:34:42.902764 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wzwmc/must-gather-86cgn"] Jan 26 00:34:43 crc kubenswrapper[5107]: I0126 00:34:43.578655 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wzwmc/must-gather-86cgn" event={"ID":"ca406593-5197-475d-afd4-32f36cff5a44","Type":"ContainerStarted","Data":"d360bbbc78093324ddebbe21f3d37d6e2dc8104f87208dfc2e41d117d3d7ea15"} Jan 26 00:34:49 crc kubenswrapper[5107]: I0126 00:34:49.650438 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wzwmc/must-gather-86cgn" event={"ID":"ca406593-5197-475d-afd4-32f36cff5a44","Type":"ContainerStarted","Data":"37c0a9505d9eb8bf6cc7fc604d257200a9a09bf2003e44637f71a3ef3aa2d527"} Jan 26 00:34:49 crc kubenswrapper[5107]: I0126 00:34:49.651180 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wzwmc/must-gather-86cgn" event={"ID":"ca406593-5197-475d-afd4-32f36cff5a44","Type":"ContainerStarted","Data":"d2c8f0179f1d4b6b9796fc2898dafcdee049adc30b45af6f9aa230a820744dd7"} Jan 26 00:35:00 crc kubenswrapper[5107]: I0126 00:35:00.723423 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:35:00 crc kubenswrapper[5107]: I0126 00:35:00.724373 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:35:30 crc kubenswrapper[5107]: I0126 00:35:30.723666 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:35:30 crc kubenswrapper[5107]: I0126 00:35:30.724420 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:35:30 crc kubenswrapper[5107]: I0126 00:35:30.724494 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:35:30 crc kubenswrapper[5107]: I0126 00:35:30.725222 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68a6136f0fb49dc0b05970bd45082071005debf7e7794eeb59067e6ae923b996"} pod="openshift-machine-config-operator/machine-config-daemon-94c4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:35:30 crc kubenswrapper[5107]: I0126 00:35:30.725292 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" 
podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" containerID="cri-o://68a6136f0fb49dc0b05970bd45082071005debf7e7794eeb59067e6ae923b996" gracePeriod=600 Jan 26 00:35:30 crc kubenswrapper[5107]: I0126 00:35:30.972471 5107 generic.go:358] "Generic (PLEG): container finished" podID="7d907601-1852-43f9-8a70-ef4e71351e81" containerID="68a6136f0fb49dc0b05970bd45082071005debf7e7794eeb59067e6ae923b996" exitCode=0 Jan 26 00:35:30 crc kubenswrapper[5107]: I0126 00:35:30.972553 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerDied","Data":"68a6136f0fb49dc0b05970bd45082071005debf7e7794eeb59067e6ae923b996"} Jan 26 00:35:30 crc kubenswrapper[5107]: I0126 00:35:30.972912 5107 scope.go:117] "RemoveContainer" containerID="482044e2b3d805fd888f02ddc223f22c33448ddc500cab5ae44472e3724cc425" Jan 26 00:35:31 crc kubenswrapper[5107]: I0126 00:35:31.986821 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerStarted","Data":"2e6bc460ea0d650ceadcf3566ed26f6cdb646a7d2473d1a43396fe063f532da3"} Jan 26 00:35:32 crc kubenswrapper[5107]: I0126 00:35:32.009222 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wzwmc/must-gather-86cgn" podStartSLOduration=44.102432732 podStartE2EDuration="50.009198932s" podCreationTimestamp="2026-01-26 00:34:42 +0000 UTC" firstStartedPulling="2026-01-26 00:34:42.9112579 +0000 UTC m=+1527.828852246" lastFinishedPulling="2026-01-26 00:34:48.8180241 +0000 UTC m=+1533.735618446" observedRunningTime="2026-01-26 00:34:49.678685045 +0000 UTC m=+1534.596279401" watchObservedRunningTime="2026-01-26 00:35:32.009198932 +0000 UTC m=+1576.926793278" Jan 26 00:35:32 crc kubenswrapper[5107]: I0126 00:35:32.792769 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-lckdk_ea19e3ee-138c-4fc9-aa7f-c2c7747b3468/control-plane-machine-set-operator/0.log" Jan 26 00:35:32 crc kubenswrapper[5107]: I0126 00:35:32.928109 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-lpd5s_f147b0c8-28b8-4818-a30c-f6aa0da709db/kube-rbac-proxy/0.log" Jan 26 00:35:32 crc kubenswrapper[5107]: I0126 00:35:32.973503 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-lpd5s_f147b0c8-28b8-4818-a30c-f6aa0da709db/machine-api-operator/0.log" Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.228770 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d6zsd"] Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.351562 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6zsd"] Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.351799 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.423607 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn8wf\" (UniqueName: \"kubernetes.io/projected/1cb8553b-431b-4517-beba-800a138d1d51-kube-api-access-kn8wf\") pod \"community-operators-d6zsd\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.423706 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-utilities\") pod \"community-operators-d6zsd\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.423863 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-catalog-content\") pod \"community-operators-d6zsd\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.525772 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kn8wf\" (UniqueName: \"kubernetes.io/projected/1cb8553b-431b-4517-beba-800a138d1d51-kube-api-access-kn8wf\") pod \"community-operators-d6zsd\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.525845 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-utilities\") pod \"community-operators-d6zsd\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.526151 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-catalog-content\") pod \"community-operators-d6zsd\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.526658 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-catalog-content\") pod \"community-operators-d6zsd\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.526943 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-utilities\") pod \"community-operators-d6zsd\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.552520 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn8wf\" (UniqueName: \"kubernetes.io/projected/1cb8553b-431b-4517-beba-800a138d1d51-kube-api-access-kn8wf\") pod 
\"community-operators-d6zsd\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:44 crc kubenswrapper[5107]: I0126 00:35:44.672920 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:45 crc kubenswrapper[5107]: I0126 00:35:45.255816 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d6zsd"] Jan 26 00:35:45 crc kubenswrapper[5107]: W0126 00:35:45.262286 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cb8553b_431b_4517_beba_800a138d1d51.slice/crio-05c342d40e568d1c2fe5defa99ef6db5f673c0dbe0ede8206fcc9c91d452bcbd WatchSource:0}: Error finding container 05c342d40e568d1c2fe5defa99ef6db5f673c0dbe0ede8206fcc9c91d452bcbd: Status 404 returned error can't find the container with id 05c342d40e568d1c2fe5defa99ef6db5f673c0dbe0ede8206fcc9c91d452bcbd Jan 26 00:35:45 crc kubenswrapper[5107]: I0126 00:35:45.901729 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-d2ls2_1e19fa3e-ab08-4129-9714-1ba2e512aa68/cert-manager-controller/0.log" Jan 26 00:35:46 crc kubenswrapper[5107]: I0126 00:35:46.068517 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-hd5l4_10f941c4-c1f6-4bf3-9b29-581e7a206ef8/cert-manager-cainjector/0.log" Jan 26 00:35:46 crc kubenswrapper[5107]: I0126 00:35:46.106303 5107 generic.go:358] "Generic (PLEG): container finished" podID="1cb8553b-431b-4517-beba-800a138d1d51" containerID="01a09dcb2a68db037d120c6d7b06ecbd08daeb701482b867c85977b6057506e0" exitCode=0 Jan 26 00:35:46 crc kubenswrapper[5107]: I0126 00:35:46.106449 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6zsd" event={"ID":"1cb8553b-431b-4517-beba-800a138d1d51","Type":"ContainerDied","Data":"01a09dcb2a68db037d120c6d7b06ecbd08daeb701482b867c85977b6057506e0"} Jan 26 00:35:46 crc kubenswrapper[5107]: I0126 00:35:46.106512 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6zsd" event={"ID":"1cb8553b-431b-4517-beba-800a138d1d51","Type":"ContainerStarted","Data":"05c342d40e568d1c2fe5defa99ef6db5f673c0dbe0ede8206fcc9c91d452bcbd"} Jan 26 00:35:46 crc kubenswrapper[5107]: I0126 00:35:46.124285 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-j6gmr_6097a808-7fcb-4512-9054-3de1585157e7/cert-manager-webhook/0.log" Jan 26 00:35:47 crc kubenswrapper[5107]: I0126 00:35:47.116919 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6zsd" event={"ID":"1cb8553b-431b-4517-beba-800a138d1d51","Type":"ContainerStarted","Data":"b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f"} Jan 26 00:35:48 crc kubenswrapper[5107]: I0126 00:35:48.125456 5107 generic.go:358] "Generic (PLEG): container finished" podID="1cb8553b-431b-4517-beba-800a138d1d51" containerID="b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f" exitCode=0 Jan 26 00:35:48 crc kubenswrapper[5107]: I0126 00:35:48.126972 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6zsd" 
event={"ID":"1cb8553b-431b-4517-beba-800a138d1d51","Type":"ContainerDied","Data":"b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f"} Jan 26 00:35:49 crc kubenswrapper[5107]: I0126 00:35:49.141948 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6zsd" event={"ID":"1cb8553b-431b-4517-beba-800a138d1d51","Type":"ContainerStarted","Data":"eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173"} Jan 26 00:35:54 crc kubenswrapper[5107]: I0126 00:35:54.673187 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:54 crc kubenswrapper[5107]: I0126 00:35:54.673940 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:54 crc kubenswrapper[5107]: I0126 00:35:54.718764 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:54 crc kubenswrapper[5107]: I0126 00:35:54.742203 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d6zsd" podStartSLOduration=9.957345183 podStartE2EDuration="10.742176885s" podCreationTimestamp="2026-01-26 00:35:44 +0000 UTC" firstStartedPulling="2026-01-26 00:35:46.107869237 +0000 UTC m=+1591.025463583" lastFinishedPulling="2026-01-26 00:35:46.892700939 +0000 UTC m=+1591.810295285" observedRunningTime="2026-01-26 00:35:49.168507048 +0000 UTC m=+1594.086101394" watchObservedRunningTime="2026-01-26 00:35:54.742176885 +0000 UTC m=+1599.659771241" Jan 26 00:35:55 crc kubenswrapper[5107]: I0126 00:35:55.234269 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:55 crc kubenswrapper[5107]: I0126 00:35:55.283821 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6zsd"] Jan 26 00:35:57 crc kubenswrapper[5107]: I0126 00:35:57.244444 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d6zsd" podUID="1cb8553b-431b-4517-beba-800a138d1d51" containerName="registry-server" containerID="cri-o://eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173" gracePeriod=2 Jan 26 00:35:57 crc kubenswrapper[5107]: I0126 00:35:57.648364 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:57 crc kubenswrapper[5107]: I0126 00:35:57.845761 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-utilities\") pod \"1cb8553b-431b-4517-beba-800a138d1d51\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " Jan 26 00:35:57 crc kubenswrapper[5107]: I0126 00:35:57.845860 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn8wf\" (UniqueName: \"kubernetes.io/projected/1cb8553b-431b-4517-beba-800a138d1d51-kube-api-access-kn8wf\") pod \"1cb8553b-431b-4517-beba-800a138d1d51\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " Jan 26 00:35:57 crc kubenswrapper[5107]: I0126 00:35:57.846105 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-catalog-content\") pod \"1cb8553b-431b-4517-beba-800a138d1d51\" (UID: \"1cb8553b-431b-4517-beba-800a138d1d51\") " Jan 26 00:35:57 crc kubenswrapper[5107]: I0126 00:35:57.847205 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-utilities" (OuterVolumeSpecName: "utilities") pod "1cb8553b-431b-4517-beba-800a138d1d51" (UID: "1cb8553b-431b-4517-beba-800a138d1d51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:35:57 crc kubenswrapper[5107]: I0126 00:35:57.857955 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb8553b-431b-4517-beba-800a138d1d51-kube-api-access-kn8wf" (OuterVolumeSpecName: "kube-api-access-kn8wf") pod "1cb8553b-431b-4517-beba-800a138d1d51" (UID: "1cb8553b-431b-4517-beba-800a138d1d51"). InnerVolumeSpecName "kube-api-access-kn8wf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:35:57 crc kubenswrapper[5107]: I0126 00:35:57.948183 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:35:57 crc kubenswrapper[5107]: I0126 00:35:57.948215 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kn8wf\" (UniqueName: \"kubernetes.io/projected/1cb8553b-431b-4517-beba-800a138d1d51-kube-api-access-kn8wf\") on node \"crc\" DevicePath \"\"" Jan 26 00:35:57 crc kubenswrapper[5107]: I0126 00:35:57.975519 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cb8553b-431b-4517-beba-800a138d1d51" (UID: "1cb8553b-431b-4517-beba-800a138d1d51"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.049641 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb8553b-431b-4517-beba-800a138d1d51-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.262488 5107 generic.go:358] "Generic (PLEG): container finished" podID="1cb8553b-431b-4517-beba-800a138d1d51" containerID="eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173" exitCode=0 Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.262583 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6zsd" event={"ID":"1cb8553b-431b-4517-beba-800a138d1d51","Type":"ContainerDied","Data":"eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173"} Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.262650 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d6zsd" Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.262681 5107 scope.go:117] "RemoveContainer" containerID="eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173" Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.262660 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d6zsd" event={"ID":"1cb8553b-431b-4517-beba-800a138d1d51","Type":"ContainerDied","Data":"05c342d40e568d1c2fe5defa99ef6db5f673c0dbe0ede8206fcc9c91d452bcbd"} Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.284533 5107 scope.go:117] "RemoveContainer" containerID="b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f" Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.311543 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d6zsd"] Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.318762 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d6zsd"] Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.328547 5107 scope.go:117] "RemoveContainer" containerID="01a09dcb2a68db037d120c6d7b06ecbd08daeb701482b867c85977b6057506e0" Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.347269 5107 scope.go:117] "RemoveContainer" containerID="eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173" Jan 26 00:35:58 crc kubenswrapper[5107]: E0126 00:35:58.347756 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173\": container with ID starting with eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173 not found: ID does not exist" containerID="eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173" Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.347800 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173"} err="failed to get container status \"eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173\": rpc error: code = NotFound desc = could not find container \"eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173\": container with ID starting with eddcea8010df46ef97dabba95d5cf3378778cb575b89011a183b0a27115b5173 not found: ID does not exist" Jan 26 
00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.347824 5107 scope.go:117] "RemoveContainer" containerID="b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f" Jan 26 00:35:58 crc kubenswrapper[5107]: E0126 00:35:58.348338 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f\": container with ID starting with b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f not found: ID does not exist" containerID="b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f" Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.348382 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f"} err="failed to get container status \"b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f\": rpc error: code = NotFound desc = could not find container \"b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f\": container with ID starting with b9a2137b152b7822f81b244675242313b550113b6d241ee49724c11198db652f not found: ID does not exist" Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.348414 5107 scope.go:117] "RemoveContainer" containerID="01a09dcb2a68db037d120c6d7b06ecbd08daeb701482b867c85977b6057506e0" Jan 26 00:35:58 crc kubenswrapper[5107]: E0126 00:35:58.348688 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a09dcb2a68db037d120c6d7b06ecbd08daeb701482b867c85977b6057506e0\": container with ID starting with 01a09dcb2a68db037d120c6d7b06ecbd08daeb701482b867c85977b6057506e0 not found: ID does not exist" containerID="01a09dcb2a68db037d120c6d7b06ecbd08daeb701482b867c85977b6057506e0" Jan 26 00:35:58 crc kubenswrapper[5107]: I0126 00:35:58.348718 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a09dcb2a68db037d120c6d7b06ecbd08daeb701482b867c85977b6057506e0"} err="failed to get container status \"01a09dcb2a68db037d120c6d7b06ecbd08daeb701482b867c85977b6057506e0\": rpc error: code = NotFound desc = could not find container \"01a09dcb2a68db037d120c6d7b06ecbd08daeb701482b867c85977b6057506e0\": container with ID starting with 01a09dcb2a68db037d120c6d7b06ecbd08daeb701482b867c85977b6057506e0 not found: ID does not exist" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.123422 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb8553b-431b-4517-beba-800a138d1d51" path="/var/lib/kubelet/pods/1cb8553b-431b-4517-beba-800a138d1d51/volumes" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.137699 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489796-v4f9j"] Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.138544 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1cb8553b-431b-4517-beba-800a138d1d51" containerName="extract-utilities" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.138568 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb8553b-431b-4517-beba-800a138d1d51" containerName="extract-utilities" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.138623 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1cb8553b-431b-4517-beba-800a138d1d51" containerName="extract-content" Jan 26 00:36:00 crc 
kubenswrapper[5107]: I0126 00:36:00.138633 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb8553b-431b-4517-beba-800a138d1d51" containerName="extract-content" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.138647 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1cb8553b-431b-4517-beba-800a138d1d51" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.138655 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb8553b-431b-4517-beba-800a138d1d51" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.138768 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="1cb8553b-431b-4517-beba-800a138d1d51" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.252937 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489796-v4f9j"] Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.253093 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-v4f9j" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.255352 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.256851 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.257075 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-96gbq\"" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.348387 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjht\" (UniqueName: \"kubernetes.io/projected/ea598b86-0f14-44bc-ae03-80bfaf06e0a6-kube-api-access-wrjht\") pod \"auto-csr-approver-29489796-v4f9j\" (UID: \"ea598b86-0f14-44bc-ae03-80bfaf06e0a6\") " pod="openshift-infra/auto-csr-approver-29489796-v4f9j" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.450747 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjht\" (UniqueName: \"kubernetes.io/projected/ea598b86-0f14-44bc-ae03-80bfaf06e0a6-kube-api-access-wrjht\") pod \"auto-csr-approver-29489796-v4f9j\" (UID: \"ea598b86-0f14-44bc-ae03-80bfaf06e0a6\") " pod="openshift-infra/auto-csr-approver-29489796-v4f9j" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.567208 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjht\" (UniqueName: \"kubernetes.io/projected/ea598b86-0f14-44bc-ae03-80bfaf06e0a6-kube-api-access-wrjht\") pod \"auto-csr-approver-29489796-v4f9j\" (UID: \"ea598b86-0f14-44bc-ae03-80bfaf06e0a6\") " pod="openshift-infra/auto-csr-approver-29489796-v4f9j" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.574482 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-v4f9j" Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.874217 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489796-v4f9j"] Jan 26 00:36:00 crc kubenswrapper[5107]: I0126 00:36:00.894303 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:36:01 crc kubenswrapper[5107]: I0126 00:36:01.368136 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489796-v4f9j" event={"ID":"ea598b86-0f14-44bc-ae03-80bfaf06e0a6","Type":"ContainerStarted","Data":"be1e67e1a143cf7a143402af7b9fb80d3d8deb207eccb466cc70a8fa9abe9826"} Jan 26 00:36:02 crc kubenswrapper[5107]: I0126 00:36:02.236188 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-xpxw5_123740f4-e15a-41f1-a226-52d4c99d5b2c/prometheus-operator/0.log" Jan 26 00:36:02 crc kubenswrapper[5107]: I0126 00:36:02.343042 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm_c77a1404-d97a-4b52-9272-21ff7b6fe4f7/prometheus-operator-admission-webhook/0.log" Jan 26 00:36:02 crc kubenswrapper[5107]: I0126 00:36:02.403679 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z_10bc00e1-36a9-4698-a7d5-8d1378427b9e/prometheus-operator-admission-webhook/0.log" Jan 26 00:36:02 crc kubenswrapper[5107]: I0126 00:36:02.553992 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-7ml9v_d664f8f1-6e8c-4763-b2e5-3ce3cda11786/operator/0.log" Jan 26 00:36:02 crc kubenswrapper[5107]: I0126 00:36:02.631768 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-dgrdx_483aa877-5602-47cb-ba02-45775e6d5cd7/perses-operator/0.log" Jan 26 00:36:03 crc kubenswrapper[5107]: I0126 00:36:03.394580 5107 generic.go:358] "Generic (PLEG): container finished" podID="ea598b86-0f14-44bc-ae03-80bfaf06e0a6" containerID="8d5aebf596a35faefdfb0720dd34619229966c7c144c08980728748da5f251cc" exitCode=0 Jan 26 00:36:03 crc kubenswrapper[5107]: I0126 00:36:03.394687 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489796-v4f9j" event={"ID":"ea598b86-0f14-44bc-ae03-80bfaf06e0a6","Type":"ContainerDied","Data":"8d5aebf596a35faefdfb0720dd34619229966c7c144c08980728748da5f251cc"} Jan 26 00:36:04 crc kubenswrapper[5107]: I0126 00:36:04.694202 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-v4f9j" Jan 26 00:36:04 crc kubenswrapper[5107]: I0126 00:36:04.726492 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrjht\" (UniqueName: \"kubernetes.io/projected/ea598b86-0f14-44bc-ae03-80bfaf06e0a6-kube-api-access-wrjht\") pod \"ea598b86-0f14-44bc-ae03-80bfaf06e0a6\" (UID: \"ea598b86-0f14-44bc-ae03-80bfaf06e0a6\") " Jan 26 00:36:04 crc kubenswrapper[5107]: I0126 00:36:04.738335 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea598b86-0f14-44bc-ae03-80bfaf06e0a6-kube-api-access-wrjht" (OuterVolumeSpecName: "kube-api-access-wrjht") pod "ea598b86-0f14-44bc-ae03-80bfaf06e0a6" (UID: "ea598b86-0f14-44bc-ae03-80bfaf06e0a6"). InnerVolumeSpecName "kube-api-access-wrjht". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:36:04 crc kubenswrapper[5107]: I0126 00:36:04.828526 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wrjht\" (UniqueName: \"kubernetes.io/projected/ea598b86-0f14-44bc-ae03-80bfaf06e0a6-kube-api-access-wrjht\") on node \"crc\" DevicePath \"\"" Jan 26 00:36:05 crc kubenswrapper[5107]: I0126 00:36:05.454406 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489796-v4f9j" event={"ID":"ea598b86-0f14-44bc-ae03-80bfaf06e0a6","Type":"ContainerDied","Data":"be1e67e1a143cf7a143402af7b9fb80d3d8deb207eccb466cc70a8fa9abe9826"} Jan 26 00:36:05 crc kubenswrapper[5107]: I0126 00:36:05.454489 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1e67e1a143cf7a143402af7b9fb80d3d8deb207eccb466cc70a8fa9abe9826" Jan 26 00:36:05 crc kubenswrapper[5107]: I0126 00:36:05.454434 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-v4f9j" Jan 26 00:36:05 crc kubenswrapper[5107]: I0126 00:36:05.763671 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-s6zbp"] Jan 26 00:36:05 crc kubenswrapper[5107]: I0126 00:36:05.769154 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-s6zbp"] Jan 26 00:36:06 crc kubenswrapper[5107]: I0126 00:36:06.121813 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3623094c-365d-4e73-a6e4-f7a89846508b" path="/var/lib/kubelet/pods/3623094c-365d-4e73-a6e4-f7a89846508b/volumes" Jan 26 00:36:12 crc kubenswrapper[5107]: I0126 00:36:12.279241 5107 scope.go:117] "RemoveContainer" containerID="fab8fbe342061c5cbf4058d30d524bbefc446e439009a7b3669fceb29ac3f57d" Jan 26 00:36:17 crc kubenswrapper[5107]: I0126 00:36:17.758875 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x_29213ff4-4c9b-4e6d-90be-74a8ef3334c0/util/0.log" Jan 26 00:36:17 crc kubenswrapper[5107]: I0126 00:36:17.976793 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x_29213ff4-4c9b-4e6d-90be-74a8ef3334c0/util/0.log" Jan 26 00:36:17 crc kubenswrapper[5107]: I0126 00:36:17.999784 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x_29213ff4-4c9b-4e6d-90be-74a8ef3334c0/pull/0.log" Jan 26 00:36:18 crc kubenswrapper[5107]: I0126 00:36:18.027815 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x_29213ff4-4c9b-4e6d-90be-74a8ef3334c0/pull/0.log" Jan 26 00:36:18 crc kubenswrapper[5107]: I0126 00:36:18.147468 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x_29213ff4-4c9b-4e6d-90be-74a8ef3334c0/util/0.log" Jan 26 00:36:18 crc kubenswrapper[5107]: I0126 00:36:18.191247 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x_29213ff4-4c9b-4e6d-90be-74a8ef3334c0/pull/0.log" Jan 26 00:36:18 crc kubenswrapper[5107]: I0126 00:36:18.231164 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2hm9x_29213ff4-4c9b-4e6d-90be-74a8ef3334c0/extract/0.log" Jan 26 00:36:18 crc kubenswrapper[5107]: I0126 00:36:18.337041 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq_9956eca6-8cc8-40ac-9b69-9500db778f1a/util/0.log" Jan 26 00:36:18 crc kubenswrapper[5107]: I0126 00:36:18.554769 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq_9956eca6-8cc8-40ac-9b69-9500db778f1a/pull/0.log" Jan 26 00:36:18 crc kubenswrapper[5107]: I0126 00:36:18.560172 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq_9956eca6-8cc8-40ac-9b69-9500db778f1a/util/0.log" Jan 26 00:36:18 crc kubenswrapper[5107]: I0126 00:36:18.596240 5107 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq_9956eca6-8cc8-40ac-9b69-9500db778f1a/pull/0.log" Jan 26 00:36:18 crc kubenswrapper[5107]: I0126 00:36:18.814451 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq_9956eca6-8cc8-40ac-9b69-9500db778f1a/util/0.log" Jan 26 00:36:18 crc kubenswrapper[5107]: I0126 00:36:18.843130 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq_9956eca6-8cc8-40ac-9b69-9500db778f1a/pull/0.log" Jan 26 00:36:18 crc kubenswrapper[5107]: I0126 00:36:18.881546 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f9ssxq_9956eca6-8cc8-40ac-9b69-9500db778f1a/extract/0.log" Jan 26 00:36:19 crc kubenswrapper[5107]: I0126 00:36:19.038471 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx_2f4334a5-7577-470f-b5f7-32206240626a/util/0.log" Jan 26 00:36:19 crc kubenswrapper[5107]: I0126 00:36:19.235490 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx_2f4334a5-7577-470f-b5f7-32206240626a/util/0.log" Jan 26 00:36:19 crc kubenswrapper[5107]: I0126 00:36:19.236106 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx_2f4334a5-7577-470f-b5f7-32206240626a/pull/0.log" Jan 26 00:36:19 crc kubenswrapper[5107]: I0126 00:36:19.252928 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx_2f4334a5-7577-470f-b5f7-32206240626a/pull/0.log" Jan 26 00:36:19 crc kubenswrapper[5107]: I0126 00:36:19.422321 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx_2f4334a5-7577-470f-b5f7-32206240626a/util/0.log" Jan 26 00:36:19 crc kubenswrapper[5107]: I0126 00:36:19.440318 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx_2f4334a5-7577-470f-b5f7-32206240626a/pull/0.log" Jan 26 00:36:19 crc kubenswrapper[5107]: I0126 00:36:19.463430 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5eg9rpx_2f4334a5-7577-470f-b5f7-32206240626a/extract/0.log" Jan 26 00:36:19 crc kubenswrapper[5107]: I0126 00:36:19.624141 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw_904f82dd-7ba2-482e-b5d4-15f043ddea94/util/0.log" Jan 26 00:36:19 crc kubenswrapper[5107]: I0126 00:36:19.828004 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw_904f82dd-7ba2-482e-b5d4-15f043ddea94/pull/0.log" Jan 26 00:36:19 crc kubenswrapper[5107]: I0126 00:36:19.828280 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw_904f82dd-7ba2-482e-b5d4-15f043ddea94/util/0.log" Jan 26 00:36:19 crc kubenswrapper[5107]: I0126 00:36:19.860090 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw_904f82dd-7ba2-482e-b5d4-15f043ddea94/pull/0.log" Jan 26 00:36:20 crc kubenswrapper[5107]: I0126 00:36:20.126850 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw_904f82dd-7ba2-482e-b5d4-15f043ddea94/pull/0.log" Jan 26 00:36:20 crc kubenswrapper[5107]: I0126 00:36:20.130496 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw_904f82dd-7ba2-482e-b5d4-15f043ddea94/util/0.log" Jan 26 00:36:20 crc kubenswrapper[5107]: I0126 00:36:20.195003 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zbzkw_904f82dd-7ba2-482e-b5d4-15f043ddea94/extract/0.log" Jan 26 00:36:20 crc kubenswrapper[5107]: I0126 00:36:20.337077 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpnt2_777fdb5a-d598-4e89-804c-c0a26fb1d077/extract-utilities/0.log" Jan 26 00:36:20 crc kubenswrapper[5107]: I0126 00:36:20.520604 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpnt2_777fdb5a-d598-4e89-804c-c0a26fb1d077/extract-content/0.log" Jan 26 00:36:20 crc kubenswrapper[5107]: I0126 00:36:20.520639 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpnt2_777fdb5a-d598-4e89-804c-c0a26fb1d077/extract-utilities/0.log" Jan 26 00:36:20 crc kubenswrapper[5107]: I0126 00:36:20.564224 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpnt2_777fdb5a-d598-4e89-804c-c0a26fb1d077/extract-content/0.log" Jan 26 00:36:20 crc kubenswrapper[5107]: I0126 00:36:20.707972 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpnt2_777fdb5a-d598-4e89-804c-c0a26fb1d077/extract-utilities/0.log" Jan 26 00:36:20 crc kubenswrapper[5107]: I0126 00:36:20.756677 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpnt2_777fdb5a-d598-4e89-804c-c0a26fb1d077/extract-content/0.log" Jan 26 00:36:20 crc kubenswrapper[5107]: I0126 00:36:20.809859 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-56zdj_437f5b38-eba1-4df5-88b1-40368d973099/extract-utilities/0.log" Jan 26 00:36:20 crc kubenswrapper[5107]: I0126 00:36:20.869359 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xpnt2_777fdb5a-d598-4e89-804c-c0a26fb1d077/registry-server/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.017710 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-56zdj_437f5b38-eba1-4df5-88b1-40368d973099/extract-utilities/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.017711 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-56zdj_437f5b38-eba1-4df5-88b1-40368d973099/extract-content/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.023581 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-56zdj_437f5b38-eba1-4df5-88b1-40368d973099/extract-content/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.234388 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-56zdj_437f5b38-eba1-4df5-88b1-40368d973099/extract-utilities/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.242659 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-56zdj_437f5b38-eba1-4df5-88b1-40368d973099/extract-content/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.284281 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-z8mjk_f9693a56-8c67-49d4-86ef-00efbe7882a5/marketplace-operator/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.445147 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g95mx_cb46dfe5-5251-43ad-a7a1-7f52c860a08b/extract-utilities/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.545382 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-56zdj_437f5b38-eba1-4df5-88b1-40368d973099/registry-server/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.647548 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g95mx_cb46dfe5-5251-43ad-a7a1-7f52c860a08b/extract-utilities/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.680818 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g95mx_cb46dfe5-5251-43ad-a7a1-7f52c860a08b/extract-content/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.694483 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g95mx_cb46dfe5-5251-43ad-a7a1-7f52c860a08b/extract-content/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.911357 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g95mx_cb46dfe5-5251-43ad-a7a1-7f52c860a08b/extract-utilities/0.log" Jan 26 00:36:21 crc kubenswrapper[5107]: I0126 00:36:21.934360 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g95mx_cb46dfe5-5251-43ad-a7a1-7f52c860a08b/extract-content/0.log" Jan 26 00:36:22 crc kubenswrapper[5107]: I0126 00:36:22.135809 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g95mx_cb46dfe5-5251-43ad-a7a1-7f52c860a08b/registry-server/0.log" Jan 26 00:36:34 crc kubenswrapper[5107]: I0126 00:36:34.078370 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-xpxw5_123740f4-e15a-41f1-a226-52d4c99d5b2c/prometheus-operator/0.log" Jan 26 00:36:34 crc kubenswrapper[5107]: I0126 00:36:34.082826 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5fb8f8d664-d2thm_c77a1404-d97a-4b52-9272-21ff7b6fe4f7/prometheus-operator-admission-webhook/0.log" Jan 26 00:36:34 crc kubenswrapper[5107]: I0126 00:36:34.162053 
5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5fb8f8d664-xdd8z_10bc00e1-36a9-4698-a7d5-8d1378427b9e/prometheus-operator-admission-webhook/0.log" Jan 26 00:36:34 crc kubenswrapper[5107]: I0126 00:36:34.291259 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-dgrdx_483aa877-5602-47cb-ba02-45775e6d5cd7/perses-operator/0.log" Jan 26 00:36:34 crc kubenswrapper[5107]: I0126 00:36:34.293835 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-7ml9v_d664f8f1-6e8c-4763-b2e5-3ce3cda11786/operator/0.log" Jan 26 00:37:21 crc kubenswrapper[5107]: I0126 00:37:21.305685 5107 generic.go:358] "Generic (PLEG): container finished" podID="ca406593-5197-475d-afd4-32f36cff5a44" containerID="d2c8f0179f1d4b6b9796fc2898dafcdee049adc30b45af6f9aa230a820744dd7" exitCode=0 Jan 26 00:37:21 crc kubenswrapper[5107]: I0126 00:37:21.305789 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wzwmc/must-gather-86cgn" event={"ID":"ca406593-5197-475d-afd4-32f36cff5a44","Type":"ContainerDied","Data":"d2c8f0179f1d4b6b9796fc2898dafcdee049adc30b45af6f9aa230a820744dd7"} Jan 26 00:37:21 crc kubenswrapper[5107]: I0126 00:37:21.307298 5107 scope.go:117] "RemoveContainer" containerID="d2c8f0179f1d4b6b9796fc2898dafcdee049adc30b45af6f9aa230a820744dd7" Jan 26 00:37:21 crc kubenswrapper[5107]: I0126 00:37:21.777208 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wzwmc_must-gather-86cgn_ca406593-5197-475d-afd4-32f36cff5a44/gather/0.log" Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.064482 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wzwmc/must-gather-86cgn"] Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.067187 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-wzwmc/must-gather-86cgn" podUID="ca406593-5197-475d-afd4-32f36cff5a44" containerName="copy" containerID="cri-o://37c0a9505d9eb8bf6cc7fc604d257200a9a09bf2003e44637f71a3ef3aa2d527" gracePeriod=2 Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.073821 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wzwmc/must-gather-86cgn"] Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.088004 5107 status_manager.go:895] "Failed to get status for pod" podUID="ca406593-5197-475d-afd4-32f36cff5a44" pod="openshift-must-gather-wzwmc/must-gather-86cgn" err="pods \"must-gather-86cgn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wzwmc\": no relationship found between node 'crc' and this object" Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.363988 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wzwmc_must-gather-86cgn_ca406593-5197-475d-afd4-32f36cff5a44/copy/0.log" Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.364862 5107 generic.go:358] "Generic (PLEG): container finished" podID="ca406593-5197-475d-afd4-32f36cff5a44" containerID="37c0a9505d9eb8bf6cc7fc604d257200a9a09bf2003e44637f71a3ef3aa2d527" exitCode=143 Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.514529 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wzwmc_must-gather-86cgn_ca406593-5197-475d-afd4-32f36cff5a44/copy/0.log" Jan 26 00:37:28 crc 
kubenswrapper[5107]: I0126 00:37:28.515103 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wzwmc/must-gather-86cgn" Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.624391 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca406593-5197-475d-afd4-32f36cff5a44-must-gather-output\") pod \"ca406593-5197-475d-afd4-32f36cff5a44\" (UID: \"ca406593-5197-475d-afd4-32f36cff5a44\") " Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.624956 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g54m\" (UniqueName: \"kubernetes.io/projected/ca406593-5197-475d-afd4-32f36cff5a44-kube-api-access-7g54m\") pod \"ca406593-5197-475d-afd4-32f36cff5a44\" (UID: \"ca406593-5197-475d-afd4-32f36cff5a44\") " Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.651251 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca406593-5197-475d-afd4-32f36cff5a44-kube-api-access-7g54m" (OuterVolumeSpecName: "kube-api-access-7g54m") pod "ca406593-5197-475d-afd4-32f36cff5a44" (UID: "ca406593-5197-475d-afd4-32f36cff5a44"). InnerVolumeSpecName "kube-api-access-7g54m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.689302 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca406593-5197-475d-afd4-32f36cff5a44-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ca406593-5197-475d-afd4-32f36cff5a44" (UID: "ca406593-5197-475d-afd4-32f36cff5a44"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.726468 5107 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca406593-5197-475d-afd4-32f36cff5a44-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 00:37:28 crc kubenswrapper[5107]: I0126 00:37:28.726524 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7g54m\" (UniqueName: \"kubernetes.io/projected/ca406593-5197-475d-afd4-32f36cff5a44-kube-api-access-7g54m\") on node \"crc\" DevicePath \"\"" Jan 26 00:37:29 crc kubenswrapper[5107]: I0126 00:37:29.375274 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wzwmc_must-gather-86cgn_ca406593-5197-475d-afd4-32f36cff5a44/copy/0.log" Jan 26 00:37:29 crc kubenswrapper[5107]: I0126 00:37:29.377341 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wzwmc/must-gather-86cgn" Jan 26 00:37:29 crc kubenswrapper[5107]: I0126 00:37:29.377441 5107 scope.go:117] "RemoveContainer" containerID="37c0a9505d9eb8bf6cc7fc604d257200a9a09bf2003e44637f71a3ef3aa2d527" Jan 26 00:37:29 crc kubenswrapper[5107]: I0126 00:37:29.413640 5107 scope.go:117] "RemoveContainer" containerID="d2c8f0179f1d4b6b9796fc2898dafcdee049adc30b45af6f9aa230a820744dd7" Jan 26 00:37:30 crc kubenswrapper[5107]: I0126 00:37:30.124019 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca406593-5197-475d-afd4-32f36cff5a44" path="/var/lib/kubelet/pods/ca406593-5197-475d-afd4-32f36cff5a44/volumes" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.148712 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489798-nxnpp"] Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.150378 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca406593-5197-475d-afd4-32f36cff5a44" containerName="copy" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.150427 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca406593-5197-475d-afd4-32f36cff5a44" containerName="copy" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.150453 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea598b86-0f14-44bc-ae03-80bfaf06e0a6" containerName="oc" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.150460 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea598b86-0f14-44bc-ae03-80bfaf06e0a6" containerName="oc" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.150482 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca406593-5197-475d-afd4-32f36cff5a44" containerName="gather" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.150489 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca406593-5197-475d-afd4-32f36cff5a44" containerName="gather" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.150618 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="ca406593-5197-475d-afd4-32f36cff5a44" containerName="copy" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.150635 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea598b86-0f14-44bc-ae03-80bfaf06e0a6" containerName="oc" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.150644 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="ca406593-5197-475d-afd4-32f36cff5a44" containerName="gather" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.158299 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489798-nxnpp"] Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.158465 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489798-nxnpp" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.162294 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.162581 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.162607 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-96gbq\"" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.231622 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppsjv\" (UniqueName: \"kubernetes.io/projected/301bfca5-6465-4735-bff1-37acad855d22-kube-api-access-ppsjv\") pod \"auto-csr-approver-29489798-nxnpp\" (UID: \"301bfca5-6465-4735-bff1-37acad855d22\") " pod="openshift-infra/auto-csr-approver-29489798-nxnpp" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.333308 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ppsjv\" (UniqueName: \"kubernetes.io/projected/301bfca5-6465-4735-bff1-37acad855d22-kube-api-access-ppsjv\") pod \"auto-csr-approver-29489798-nxnpp\" (UID: \"301bfca5-6465-4735-bff1-37acad855d22\") " pod="openshift-infra/auto-csr-approver-29489798-nxnpp" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.356099 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppsjv\" (UniqueName: \"kubernetes.io/projected/301bfca5-6465-4735-bff1-37acad855d22-kube-api-access-ppsjv\") pod \"auto-csr-approver-29489798-nxnpp\" (UID: \"301bfca5-6465-4735-bff1-37acad855d22\") " pod="openshift-infra/auto-csr-approver-29489798-nxnpp" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.479320 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489798-nxnpp" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.724538 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.725115 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:38:00 crc kubenswrapper[5107]: I0126 00:38:00.915487 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489798-nxnpp"] Jan 26 00:38:01 crc kubenswrapper[5107]: I0126 00:38:01.640672 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489798-nxnpp" event={"ID":"301bfca5-6465-4735-bff1-37acad855d22","Type":"ContainerStarted","Data":"c42ac6c648b82a149a9c3e1a5f64a2892a7dbac92ad4d58b1d85e241204972a5"} Jan 26 00:38:02 crc kubenswrapper[5107]: I0126 00:38:02.650079 5107 generic.go:358] "Generic (PLEG): container finished" podID="301bfca5-6465-4735-bff1-37acad855d22" containerID="128dd6e22d32e70940d34af4b064b93aa8222d5f5a4222d9b14a6bac6eba26f0" exitCode=0 Jan 26 00:38:02 crc kubenswrapper[5107]: I0126 00:38:02.650184 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489798-nxnpp" event={"ID":"301bfca5-6465-4735-bff1-37acad855d22","Type":"ContainerDied","Data":"128dd6e22d32e70940d34af4b064b93aa8222d5f5a4222d9b14a6bac6eba26f0"} Jan 26 00:38:03 crc kubenswrapper[5107]: I0126 00:38:03.878367 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489798-nxnpp" Jan 26 00:38:04 crc kubenswrapper[5107]: I0126 00:38:04.012528 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppsjv\" (UniqueName: \"kubernetes.io/projected/301bfca5-6465-4735-bff1-37acad855d22-kube-api-access-ppsjv\") pod \"301bfca5-6465-4735-bff1-37acad855d22\" (UID: \"301bfca5-6465-4735-bff1-37acad855d22\") " Jan 26 00:38:04 crc kubenswrapper[5107]: I0126 00:38:04.021079 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301bfca5-6465-4735-bff1-37acad855d22-kube-api-access-ppsjv" (OuterVolumeSpecName: "kube-api-access-ppsjv") pod "301bfca5-6465-4735-bff1-37acad855d22" (UID: "301bfca5-6465-4735-bff1-37acad855d22"). InnerVolumeSpecName "kube-api-access-ppsjv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:38:04 crc kubenswrapper[5107]: I0126 00:38:04.114559 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ppsjv\" (UniqueName: \"kubernetes.io/projected/301bfca5-6465-4735-bff1-37acad855d22-kube-api-access-ppsjv\") on node \"crc\" DevicePath \"\"" Jan 26 00:38:04 crc kubenswrapper[5107]: I0126 00:38:04.668913 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489798-nxnpp" event={"ID":"301bfca5-6465-4735-bff1-37acad855d22","Type":"ContainerDied","Data":"c42ac6c648b82a149a9c3e1a5f64a2892a7dbac92ad4d58b1d85e241204972a5"} Jan 26 00:38:04 crc kubenswrapper[5107]: I0126 00:38:04.668976 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42ac6c648b82a149a9c3e1a5f64a2892a7dbac92ad4d58b1d85e241204972a5" Jan 26 00:38:04 crc kubenswrapper[5107]: I0126 00:38:04.669065 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489798-nxnpp" Jan 26 00:38:04 crc kubenswrapper[5107]: I0126 00:38:04.956016 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-9kxhk"] Jan 26 00:38:04 crc kubenswrapper[5107]: I0126 00:38:04.961048 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-9kxhk"] Jan 26 00:38:06 crc kubenswrapper[5107]: I0126 00:38:06.124781 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261140c0-a32c-4656-914c-7b6c9f0c8968" path="/var/lib/kubelet/pods/261140c0-a32c-4656-914c-7b6c9f0c8968/volumes" Jan 26 00:38:12 crc kubenswrapper[5107]: I0126 00:38:12.472470 5107 scope.go:117] "RemoveContainer" containerID="c39cba9455b3ee0cdc317f027086cb8fe1123bcfb8b7290d40d1b59cbbb93a79" Jan 26 00:38:30 crc kubenswrapper[5107]: I0126 00:38:30.724225 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:38:30 crc kubenswrapper[5107]: I0126 00:38:30.724984 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:39:00 crc kubenswrapper[5107]: I0126 00:39:00.723914 5107 patch_prober.go:28] interesting pod/machine-config-daemon-94c4c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:39:00 crc kubenswrapper[5107]: I0126 00:39:00.724652 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:39:00 crc kubenswrapper[5107]: I0126 00:39:00.724727 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-94c4c" Jan 26 00:39:00 crc kubenswrapper[5107]: I0126 00:39:00.725578 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e6bc460ea0d650ceadcf3566ed26f6cdb646a7d2473d1a43396fe063f532da3"} pod="openshift-machine-config-operator/machine-config-daemon-94c4c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:39:00 crc kubenswrapper[5107]: I0126 00:39:00.725652 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" containerName="machine-config-daemon" containerID="cri-o://2e6bc460ea0d650ceadcf3566ed26f6cdb646a7d2473d1a43396fe063f532da3" gracePeriod=600 Jan 26 00:39:00 crc kubenswrapper[5107]: E0126 00:39:00.862022 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94c4c_openshift-machine-config-operator(7d907601-1852-43f9-8a70-ef4e71351e81)\"" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" Jan 26 00:39:01 crc kubenswrapper[5107]: I0126 00:39:01.178639 5107 generic.go:358] "Generic (PLEG): container finished" podID="7d907601-1852-43f9-8a70-ef4e71351e81" containerID="2e6bc460ea0d650ceadcf3566ed26f6cdb646a7d2473d1a43396fe063f532da3" exitCode=0 Jan 26 00:39:01 crc kubenswrapper[5107]: I0126 00:39:01.178717 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" event={"ID":"7d907601-1852-43f9-8a70-ef4e71351e81","Type":"ContainerDied","Data":"2e6bc460ea0d650ceadcf3566ed26f6cdb646a7d2473d1a43396fe063f532da3"} Jan 26 00:39:01 crc kubenswrapper[5107]: I0126 00:39:01.178814 5107 scope.go:117] "RemoveContainer" containerID="68a6136f0fb49dc0b05970bd45082071005debf7e7794eeb59067e6ae923b996" Jan 26 00:39:01 crc kubenswrapper[5107]: I0126 00:39:01.179471 5107 scope.go:117] "RemoveContainer" containerID="2e6bc460ea0d650ceadcf3566ed26f6cdb646a7d2473d1a43396fe063f532da3" Jan 26 00:39:01 crc kubenswrapper[5107]: E0126 00:39:01.179831 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94c4c_openshift-machine-config-operator(7d907601-1852-43f9-8a70-ef4e71351e81)\"" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" Jan 26 00:39:13 crc kubenswrapper[5107]: I0126 00:39:13.112982 5107 scope.go:117] "RemoveContainer" containerID="2e6bc460ea0d650ceadcf3566ed26f6cdb646a7d2473d1a43396fe063f532da3" Jan 26 00:39:13 crc kubenswrapper[5107]: E0126 00:39:13.113930 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-94c4c_openshift-machine-config-operator(7d907601-1852-43f9-8a70-ef4e71351e81)\"" pod="openshift-machine-config-operator/machine-config-daemon-94c4c" podUID="7d907601-1852-43f9-8a70-ef4e71351e81" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 
00:39:17.338303 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_dd22b8af-ee94-4947-914f-6405029c6104/docker-build/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.340355 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_dd22b8af-ee94-4947-914f-6405029c6104/docker-build/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.340926 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_6194cf20-381a-4030-a802-413bdf580aca/docker-build/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.342315 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_6194cf20-381a-4030-a802-413bdf580aca/docker-build/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.343241 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_56677a28-74b1-42c7-a42b-1aaf1ebcdc8a/docker-build/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.344902 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_56677a28-74b1-42c7-a42b-1aaf1ebcdc8a/docker-build/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.346496 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e49e6c4b-b61e-40bf-8b52-2abf782b22df/docker-build/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.348385 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_e49e6c4b-b61e-40bf-8b52-2abf782b22df/docker-build/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.491629 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f2mpq_2e5342d5-2d0c-458d-94b7-25c802ce298a/kube-multus/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.491853 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f2mpq_2e5342d5-2d0c-458d-94b7-25c802ce298a/kube-multus/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.498837 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.499272 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.506123 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 26 00:39:17 crc kubenswrapper[5107]: I0126 00:39:17.506173 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"